Python preprocessing setup

from tensorflow.keras.datasets import cifar10
from tensorflow.keras import utils
from tensorflow.keras.preprocessing.image import ImageDataGenerator

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
y_train = utils.to_categorical(y_train, num_classes)
y_test = utils.to_categorical(y_test, num_classes)
datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    validation_split=0.2)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(x_train)
# fits the model on batches with real-time data augmentation:
model.fit(datagen.flow(x_train, y_train, batch_size=32,
         subset='training'),
         validation_data=datagen.flow(x_train, y_train,
         batch_size=8, subset='validation'),
         steps_per_epoch=len(x_train) / 32, epochs=epochs)
# here's a more "manual" example
for e in range(epochs):
    print('Epoch', e)
    batches = 0
    for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):
        model.fit(x_batch, y_batch)
        batches += 1
        if batches >= len(x_train) / 32:
            # we need to break the loop by hand because
            # the generator loops indefinitely
            break
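The snippet above assumes that a compiled model and the names num_classes and epochs already exist. A minimal sketch that would satisfy those assumptions (the tiny architecture here is illustrative only, not part of the original example):

```python
import tensorflow as tf

num_classes = 10  # CIFAR-10 has 10 classes
epochs = 2        # kept small for illustration

# A deliberately tiny CNN so the augmentation example can run end to end.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Any model whose input matches the CIFAR-10 image shape (32, 32, 3) and whose output has num_classes units would work in its place.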
Deprecated: ImageDataGenerator is not recommended for new code. Prefer loading images with tf.keras.utils.image_dataset_from_directory and transforming the output tf.data.Dataset with preprocessing layers. For more information, see the tutorials for loading images and augmenting images, as well as the preprocessing layer guide.
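A minimal sketch of that recommended replacement, expressing the augmentation as preprocessing layers applied to a tf.data.Dataset. The random stand-in data is hypothetical; in practice you would build the dataset with tf.keras.utils.image_dataset_from_directory instead:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data; in practice, load real images, e.g. with
# tf.keras.utils.image_dataset_from_directory("data/train", image_size=(150, 150)).
images = np.random.rand(8, 150, 150, 3).astype("float32")
labels = np.random.randint(0, 2, size=(8,))
train_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)

# Augmentation expressed as Keras preprocessing layers instead of ImageDataGenerator.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# training=True ensures the random transforms are actually applied.
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
batch_x, batch_y = next(iter(train_ds))
```

Unlike a Python generator, the resulting pipeline can be prefetched and runs inside the TensorFlow graph.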

The data will be looped over (in batches).

  • width_shift_range: Float, 1-D array-like or int.
      • float: fraction of total width if < 1, or pixels if >= 1.
      • 1-D array-like: random elements from the array.
      • int: integer number of pixels from the interval (-width_shift_range, +width_shift_range).
    With width_shift_range=2, possible values are the integers [-1, 0, +1], the same as with width_shift_range=[-1, 0, +1], while with width_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0).
  • height_shift_range: Float, 1-D array-like or int.
      • float: fraction of total height if < 1, or pixels if >= 1.
      • 1-D array-like: random elements from the array.
      • int: integer number of pixels from the interval (-height_shift_range, +height_shift_range).
    With height_shift_range=2, possible values are the integers [-1, 0, +1], the same as with height_shift_range=[-1, 0, +1], while with height_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0).
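The three shift-sampling rules can be simulated with plain NumPy. This is an illustrative sketch of the sampling semantics, not the library's internal code:

```python
import numpy as np

rng = np.random.default_rng(0)

# width_shift_range=2 (int): integer pixel shifts from (-2, +2), i.e. {-1, 0, +1}
int_shift = int(rng.integers(-1, 2))

# width_shift_range=[-1, 0, +1] (1-D array-like): a random element of the array
arr_shift = int(rng.choice([-1, 0, 1]))

# width_shift_range=1.0 (float >= 1): pixel shifts, floats in [-1.0, +1.0)
float_shift = float(rng.uniform(-1.0, 1.0))
```

Note the asymmetry: the int form yields only whole-pixel shifts, while the float form yields continuous sub-pixel shifts over the same range.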
Arguments

  • featurewise_center: Boolean. Set the input mean to 0 over the dataset, feature-wise.
  • samplewise_center: Boolean. Set each sample mean to 0.
  • featurewise_std_normalization: Boolean. Divide inputs by the std of the dataset, feature-wise.
  • samplewise_std_normalization: Boolean. Divide each input by its std.
  • zca_epsilon: epsilon for ZCA whitening. Default is 1e-6.
  • zca_whitening: Boolean. Apply ZCA whitening.
  • rotation_range: Int. Degree range for random rotations.
  • width_shift_range: Float, 1-D array-like or int (see above).
  • height_shift_range: Float, 1-D array-like or int (see above).
  • brightness_range: Tuple or list of two floats. Range for picking a brightness shift value from.
  • shear_range: Float. Shear intensity (shear angle in counter-clockwise direction, in degrees).
  • zoom_range: Float or [lower, upper]. Range for random zoom. If a float, [lower, upper] = [1 - zoom_range, 1 + zoom_range].
  • channel_shift_range: Float. Range for random channel shifts.
  • fill_mode: One of {"constant", "nearest", "reflect", "wrap"}. Default is 'nearest'. Points outside the boundaries of the input are filled according to the given mode:
      • 'constant': kkkkkkkk|abcd|kkkkkkkk (cval=k)
      • 'nearest': aaaaaaaa|abcd|dddddddd
      • 'reflect': abcddcba|abcd|dcbaabcd
      • 'wrap': abcdabcd|abcd|abcdabcd
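The four fill behaviours can be previewed on a single pixel row with NumPy's np.pad, whose "edge", "symmetric" and "wrap" modes correspond, roughly, to the 'nearest', 'reflect' and 'wrap' modes described above (an illustrative analogy, not the generator's internal implementation):

```python
import numpy as np

row = np.array([1, 2, 3, 4])  # stands in for the pixel row "abcd"

constant = np.pad(row, 2, mode="constant", constant_values=9)  # 'constant', cval=9
nearest = np.pad(row, 2, mode="edge")       # like 'nearest'
reflect = np.pad(row, 2, mode="symmetric")  # like 'reflect' (edge pixel repeated)
wrap = np.pad(row, 2, mode="wrap")          # like 'wrap'

print(constant.tolist())  # [9, 9, 1, 2, 3, 4, 9, 9]
print(nearest.tolist())   # [1, 1, 1, 2, 3, 4, 4, 4]
print(reflect.tolist())   # [2, 1, 1, 2, 3, 4, 4, 3]
print(wrap.tolist())      # [3, 4, 1, 2, 3, 4, 1, 2]
```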
    apply_transform[
        x, transform_parameters
    ]
    
    1Float hoặc Int. Giá trị được sử dụng cho các điểm bên ngoài ranh giới khi
    apply_transform[
        x, transform_parameters
    ]
    
    2.
    apply_transform[
        x, transform_parameters
    ]
    
    3Boolean. Lật ngẫu nhiên đầu vào theo chiều ngang.
    apply_transform[
        x, transform_parameters
    ]
    
    4Boolean. Lật ngẫu nhiên đầu vào theo chiều dọc.
    apply_transform[
        x, transform_parameters
    ]
    
    5yếu tố thay đổi tỷ lệ. Mặc định là Không có. Nếu Không có hoặc 0, thì không áp dụng thay đổi tỷ lệ, nếu không, chúng tôi nhân dữ liệu với giá trị được cung cấp [sau khi áp dụng tất cả các phép biến đổi khác].
    apply_transform[
        x, transform_parameters
    ]
    
    6chức năng sẽ được áp dụng trên mỗi đầu vào. Chức năng sẽ chạy sau khi hình ảnh được thay đổi kích thước và tăng cường. Hàm sẽ nhận một đối số. một hình ảnh [tenxơ Numpy có hạng 3] và sẽ xuất ra một tenxơ Numpy có cùng hình dạng.
    apply_transform[
        x, transform_parameters
    ]
    
    data_format: Image data format, either "channels_first" or "channels_last". "channels_last" mode means that the images should have shape (samples, height, width, channels); "channels_first" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you have never set it, it will be "channels_last".
    
    validation_split: Float. Fraction of images reserved for validation (strictly between 0 and 1).
    
    dtype: Dtype to use for the generated arrays.

    Raises
    
    • ValueError: if the value of the argument data_format is other than "channels_first" or "channels_last".
    • ValueError: if the value of the argument validation_split is > 1 or < 0.

    Examples

    Example of using .flow(x, y):

    (x_train, y_train), (x_test, y_test) = cifar10.load_data()
    y_train = utils.to_categorical(y_train, num_classes)
    y_test = utils.to_categorical(y_test, num_classes)
    datagen = ImageDataGenerator(
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2)
    # compute quantities required for featurewise normalization
    # (std, mean, and principal components if ZCA whitening is applied)
    datagen.fit(x_train)
    # fits the model on batches with real-time data augmentation:
    model.fit(datagen.flow(x_train, y_train, batch_size=32,
             subset='training'),
             validation_data=datagen.flow(x_train, y_train,
             batch_size=8, subset='validation'),
             steps_per_epoch=len(x_train) / 32, epochs=epochs)
    # here's a more "manual" example
    for e in range(epochs):
        print('Epoch', e)
        batches = 0
        for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):
            model.fit(x_batch, y_batch)
            batches += 1
            if batches >= len(x_train) / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    

    Example of using .flow_from_directory(directory):

    train_datagen = ImageDataGenerator(
            rescale=1./255,
            shear_range=0.2,
            zoom_range=0.2,
            horizontal_flip=True)
    test_datagen = ImageDataGenerator(rescale=1./255)
    train_generator = train_datagen.flow_from_directory(
            'data/train',
            target_size=(150, 150),
            batch_size=32,
            class_mode='binary')
    validation_generator = test_datagen.flow_from_directory(
            'data/validation',
            target_size=(150, 150),
            batch_size=32,
            class_mode='binary')
    model.fit(
            train_generator,
            steps_per_epoch=2000,
            epochs=50,
            validation_data=validation_generator,
            validation_steps=800)
    

    Example of transforming images and masks together.

    # we create two instances with the same arguments
    data_gen_args = dict(featurewise_center=True,
                         featurewise_std_normalization=True,
                         rotation_range=90,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         zoom_range=0.2)
    image_datagen = ImageDataGenerator(**data_gen_args)
    mask_datagen = ImageDataGenerator(**data_gen_args)
    # Provide the same seed and keyword arguments to the fit and flow methods
    seed = 1
    image_datagen.fit(images, augment=True, seed=seed)
    mask_datagen.fit(masks, augment=True, seed=seed)
    image_generator = image_datagen.flow_from_directory(
        'data/images',
        class_mode=None,
        seed=seed)
    mask_generator = mask_datagen.flow_from_directory(
        'data/masks',
        class_mode=None,
        seed=seed)
    # combine generators into one which yields image and masks
    train_generator = zip(image_generator, mask_generator)
    model.fit(
        train_generator,
        steps_per_epoch=2000,
        epochs=50)
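The zip() in the example above is safe even though both directory generators loop forever: in Python 3, zip is lazy, so it draws one (image batch, mask batch) pair at a time rather than trying to exhaust either generator. A tiny stand-alone demonstration with two endless iterators:

```python
from itertools import count

# Two endless iterators stand in for the image and mask generators.
paired = zip(count(0), count(100))

print(next(paired))  # (0, 100)
print(next(paired))  # (1, 101)
```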
    

    Methods

    apply_transform(
        x, transform_parameters
    )

    Applies a transformation to an image according to given parameters.

    Args
    • x: 3D tensor, single image.
    • transform_parameters: Dictionary with string - parameter pairs describing the transformation. Currently, the following parameters from the dictionary are used:
      • 'theta': Float. Rotation angle in degrees.
      • 'tx': Float. Shift in the x direction.
      • 'ty': Float. Shift in the y direction.
      • 'shear': Float. Shear angle in degrees.
      • 'zx': Float. Zoom in the x direction.
      • 'zy': Float. Zoom in the y direction.
      • 'flip_horizontal': Boolean. Horizontal flip.
      • 'flip_vertical': Boolean. Vertical flip.
      • 'channel_shift_intensity': Float. Channel shift intensity.
      • 'brightness': Float. Brightness shift intensity.

    Returns
    A transformed version of the input (same shape).
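For intuition, two of the simpler parameters can be mimicked directly in numpy (this is not the Keras call itself, just a sketch on a rank-3 tensor; np.roll is only a wrap-around stand-in for a 'tx' shift, whereas the real transform fills the vacated pixels according to fill_mode):

```python
import numpy as np

# A tiny (height=2, width=4, channels=1) image tensor.
x = np.arange(2 * 4 * 1).reshape(2, 4, 1)

flipped = x[:, ::-1, :]          # 'flip_horizontal': reverse the width axis
shifted = np.roll(x, 1, axis=1)  # wrap-around stand-in for a 'tx' shift

print(flipped[0, :, 0].tolist())  # [3, 2, 1, 0]
print(shifted[0, :, 0].tolist())  # [3, 0, 1, 2]
```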


    fit(
        x, augment=False, rounds=1, seed=None
    )

    Fits the data generator to some sample data.

    This computes the internal data stats related to the data-dependent transformations, based on an array of sample data.

    Only required if featurewise_center or featurewise_std_normalization or zca_whitening are set to True.

    When rescale is set to a value, rescaling is applied to the sample data before computing the internal data stats.

    Args
    • x: Sample data. Should have rank 4. In case of grayscale data, the channels axis should have value 1; in case of RGB data, it should have value 3; and in case of RGBA data, it should have value 4.
    • augment: Boolean (default: False). Whether to fit on randomly augmented samples.
    • rounds: Int (default: 1). If using data augmentation (augment=True), this is how many augmentation passes over the data to use.
    • seed: Int (default: None). Random seed.
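What fit() gathers for featurewise_center and featurewise_std_normalization can be sketched in plain numpy: a per-channel mean and standard deviation over the whole sample array (assuming "channels_last" data; the exact reduction axes here are an illustration, not the library's internal code):

```python
import numpy as np

# 100 sample images of shape (8, 8, 3), channels last.
x = np.random.rand(100, 8, 8, 3)

mean = x.mean(axis=(0, 1, 2))  # one statistic per channel
std = x.std(axis=(0, 1, 2))
x_normalized = (x - mean) / (std + 1e-6)

print(mean.shape)  # (3,)
```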


    flow(
        x,
        y=None,
        batch_size=32,
        shuffle=True,
        sample_weight=None,
        seed=None,
        save_to_dir=None,
        save_prefix='',
        save_format='png',
        ignore_class_split=False,
        subset=None
    )

    Takes data and label arrays, and generates batches of augmented data.

    Args
    • x: Input data. Numpy array of rank 4 or a tuple. If a tuple, the first element should contain the images and the second element another numpy array or a list of numpy arrays that is passed to the output without any modification. Can be used to feed the model miscellaneous data along with the images. In case of grayscale data, the channels axis of the image array should have value 1; in case of RGB data, it should have value 3; and in case of RGBA data, it should have value 4.
    • y: Labels.
    • batch_size: Int (default: 32).
    • shuffle: Boolean (default: True).
    • sample_weight: Sample weights.
    • seed: Int (default: None).
    • save_to_dir: None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
    • save_prefix: Str (default: ''). Prefix to use for filenames of saved pictures (only relevant if save_to_dir is set).
    • save_format: One of "png", "jpeg", "bmp", "pdf", "ppm", "gif", "tif", "jpg" (only relevant if save_to_dir is set). Default: "png".
    • ignore_class_split: Boolean (default: False). Ignores differences in the number of classes in labels across the training and validation split (useful for non-classification tasks).
    • subset: Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.

    Returns
    An Iterator yielding tuples of (x, y), where x is a numpy array of image data (in the case of a single image input) or a list of numpy arrays (in the case of additional inputs) and y is a numpy array of corresponding labels. If sample_weight is not None, the yielded tuples are of the form (x, y, sample_weight). If y is None, only the numpy array x is returned.

    Raises
    ValueError: If the value of the argument subset is other than "training" or "validation".

    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    14
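flow_from_directory infers one class per subdirectory of the directory it is given. A small sketch of the expected layout, using only the standard library (the cats/dogs class names are hypothetical):

```python
import os
import tempfile

# Build a dummy tree shaped the way flow_from_directory expects:
# one subdirectory per class, with that class's image files inside it.
root = tempfile.mkdtemp()
for cls in ('cats', 'dogs'):
    os.makedirs(os.path.join(root, cls))

# With classes=None, the class names are inferred from the sorted
# subdirectory names.
inferred = sorted(os.listdir(root))
print(inferred)  # → ['cats', 'dogs']
```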

    flow_from_dataframe(
        dataframe,
        directory=None,
        x_col='filename',
        y_col='class',
        weight_col=None,
        target_size=(256, 256),
        color_mode='rgb',
        classes=None,
        class_mode='categorical',
        batch_size=32,
        shuffle=True,
        seed=None,
        save_to_dir=None,
        save_prefix='',
        save_format='png',
        subset=None,
        interpolation='nearest',
        validate_filenames=True,
        **kwargs
    )
    

    Takes the dataframe and the path to a directory and generates batches of augmented/normalized data.

    **A simple tutorial can be found** here.

    The dataframe should include additional column(s) depending on class_mode:
    • if class_mode is 'categorical' (the default value) it must include the y_col column with the class/es of each image; values in the column can be string/list/tuple if a single class, or list/tuple if multiple classes
    • if class_mode is 'binary' or 'sparse' it must include the given y_col column with class values as strings
    • if class_mode is 'raw' or 'multi_output' it must contain the columns specified in y_col
    • if class_mode is 'input' or None no extra column is needed
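To make the 'categorical' case concrete, here is a sketch of rows a suitable dataframe could hold — plain dicts stand in for a pandas DataFrame, and the filenames and class names are made up:

```python
# Each row names an image file (x_col='filename') and its class/es
# (y_col='class'); a list marks a multi-label sample.
rows = [
    {'filename': 'img_001.png', 'class': 'cat'},
    {'filename': 'img_002.png', 'class': 'dog'},
    {'filename': 'img_003.png', 'class': ['cat', 'dog']},  # multiple classes
]

# Collect the class vocabulary the way the generator would.
vocab = set()
for row in rows:
    c = row['class']
    vocab.update(c if isinstance(c, list) else [c])
print(sorted(vocab))  # → ['cat', 'dog']
```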
  • [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    19. Mảng 1D numpy của nhãn nhị phân,
  • [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    16. Mảng 2D gọn gàng của các nhãn được mã hóa một chiều. Hỗ trợ đầu ra đa nhãn
  • [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    27. hình ảnh giống với hình ảnh đầu vào [chủ yếu được sử dụng để hoạt động với bộ mã hóa tự động],
  • [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    24. danh sách với các giá trị của các cột khác nhau,
  • [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
  • "raw": numpy array of values in y_col column(s),
  • "sparse": 1D numpy array of integer labels,
  • None, no targets are returned (the generator will only yield batches of image data, which is useful to use in model.predict()).
  • Args_______0_______38Pandas dataframe chứa các filepath liên quan đến
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    39 [hoặc đường dẫn tuyệt đối nếu
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    39 là Không có] của các hình ảnh trong một cột chuỗi. Nó nên bao gồm các cột khác tùy thuộc vào
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    15.
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    39string, đường dẫn đến thư mục đọc ảnh từ. Nếu
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    28, dữ liệu trong cột
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    44 phải là đường dẫn tuyệt đối. Chuỗi
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    44, cột trong
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    38 chứa tên tệp [hoặc đường dẫn tuyệt đối nếu
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    39 là
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    28].
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    17chuỗi hoặc danh sách,/các cột trong
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    38 có dữ liệu đích.
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    51chuỗi, cột trong
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    38 chứa trọng lượng mẫu. Vỡ nợ.
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    28.
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    54bộ số nguyên
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    55, mặc định.
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    56. Kích thước mà tất cả các hình ảnh được tìm thấy sẽ được thay đổi kích thước.
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    color_mode: one of "grayscale", "rgb", "rgba". Default: "rgb". Whether the images will be converted to have 1, 3, or 4 channels.
    classes: optional list of class subdirectories (e.g. ['dogs', 'cats']). Default: None. If not provided, the list of classes will be automatically inferred from the subdirectory names under directory, where each subdirectory will be treated as a different class (and the order of the classes, which will map to the label indices, will be alphanumeric). The dictionary containing the mapping from class names to class indices can be obtained via the attribute class_indices.
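    The alphanumeric ordering described above can be illustrated with a short plain-Python sketch (hypothetical subdirectory names; the real mapping is built internally by Keras and exposed via the `class_indices` attribute):

```python
# Hypothetical subdirectory names found under the dataset directory.
subdirectories = ["dogs", "cats", "birds"]

# Classes are ordered alphanumerically, then mapped to label indices.
class_indices = {name: idx for idx, name in enumerate(sorted(subdirectories))}
print(class_indices)  # {'birds': 0, 'cats': 1, 'dogs': 2}
```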
    class_mode: one of "binary", "categorical", "input", "multi_output", "raw", "sparse" or None. Default: "categorical". Mode for yielding the targets.
    flow_from_directory(
        directory,
        target_size=(256, 256),
        color_mode='rgb',
        classes=None,
        class_mode='categorical',
        batch_size=32,
        shuffle=True,
        seed=None,
        save_to_dir=None,
        save_prefix='',
        save_format='png',
        follow_links=False,
        subset=None,
        interpolation='nearest',
        keep_aspect_ratio=False
    )

    batch_size: size of the batches of data (default: 32).
    shuffle: whether to shuffle the data (default: True).
    seed: optional random seed for shuffling and transformations.
    save_to_dir: None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
    save_prefix: str. Prefix to use for the filenames of saved pictures (only relevant if save_to_dir is set).
    save_format: one of "png", "jpeg", "bmp", "pdf", "ppm", "gif", "tif", "jpg" (only relevant if save_to_dir is set). Default: "png".
    subset: Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.
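    As a sketch of the arithmetic implied by `validation_split=0.2` (the exact slicing of which samples land in each subset is an internal detail of Keras; only the resulting sizes are shown here):

```python
n_samples = 50000        # e.g. the CIFAR-10 training set
validation_split = 0.2

# Samples served when subset='validation' vs. subset='training'.
n_validation = int(n_samples * validation_split)
n_training = n_samples - n_validation

print(n_training, n_validation)  # 40000 10000
```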
    interpolation: Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used.
    validate_filenames: Boolean, whether to validate the image filenames in x_col. If True, invalid images will be ignored. Disabling this option can lead to a speed-up in the execution of this function. Defaults to True.
    **kwargs: legacy arguments for raising deprecation warnings.
    Returns: A DataFrameIterator yielding tuples of (x, y), where x is a numpy array containing a batch of images and y is a numpy array of corresponding labels.
    flow_from_dataframe[
        dataframe,
        directory=None,
        x_col='filename',
        y_col='class',
        weight_col=None,
        target_size=[256, 256],
        color_mode='rgb',
        classes=None,
        class_mode='categorical',
        batch_size=32,
        shuffle=True,
        seed=None,
        save_to_dir=None,
        save_prefix='',
        save_format='png',
        subset=None,
        interpolation='nearest',
        validate_filenames=True,
        **kwargs
    ]
    
    4 là một mảng khó hiểu chứa một loạt hình ảnh có hình dạng
    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    92 và
    flow_from_directory[
        directory,
        target_size=[256, 256],
        color_mode='rgb',
        classes=None,
        class_mode='categorical',
        batch_size=32,
        shuffle=True,
        seed=None,
        save_to_dir=None,
        save_prefix='',
        save_format='png',
        follow_links=False,
        subset=None,
        interpolation='nearest',
        keep_aspect_ratio=False
    ]
    
    8 là một mảng khó hiểu của các nhãn tương ứng

    [x_train, y_train], [x_test, y_test] = cifar10.load_data[]
    y_train = utils.to_categorical[y_train, num_classes]
    y_test = utils.to_categorical[y_test, num_classes]
    datagen = ImageDataGenerator[
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True,
        validation_split=0.2]
    # compute quantities required for featurewise normalization
    # [std, mean, and principal components if ZCA whitening is applied]
    datagen.fit[x_train]
    # fits the model on batches with real-time data augmentation:
    model.fit[datagen.flow[x_train, y_train, batch_size=32,
             subset='training'],
             validation_data=datagen.flow[x_train, y_train,
             batch_size=8, subset='validation'],
             steps_per_epoch=len[x_train] / 32, epochs=epochs]
    # here's a more "manual" example
    for e in range[epochs]:
        print['Epoch', e]
        batches = 0
        for x_batch, y_batch in datagen.flow[x_train, y_train, batch_size=32]:
            model.fit[x_batch, y_batch]
            batches += 1
            if batches >= len[x_train] / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break
    
    94

    flow_from_directory[
        directory,
        target_size=[256, 256],
        color_mode='rgb',
        classes=None,
        class_mode='categorical',
        batch_size=32,
        shuffle=True,
        seed=None,
        save_to_dir=None,
        save_prefix='',
        save_format='png',
        follow_links=False,
        subset=None,
        interpolation='nearest',
        keep_aspect_ratio=False
    ]
    

    Đưa đường dẫn đến một thư mục và tạo ra các lô dữ liệu tăng cường
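Class labels are inferred from the subdirectory names in alphanumeric order. A small sketch of that inference using only the standard library (the directory layout and the helper name `infer_class_indices` are illustrative, not part of the Keras API):

```python
import os
import tempfile

def infer_class_indices(directory):
    # Each subdirectory is one class; sorting gives the alphanumeric
    # index order that the class_indices attribute exposes.
    names = sorted(
        d for d in os.listdir(directory)
        if os.path.isdir(os.path.join(directory, d))
    )
    return {name: idx for idx, name in enumerate(names)}

root = tempfile.mkdtemp()
for cls in ("dogs", "cats", "birds"):
    os.makedirs(os.path.join(root, cls))

mapping = infer_class_indices(root)
print(mapping)  # {'birds': 0, 'cats': 1, 'dogs': 2}
```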

    • "categorical" will be 2D one-hot encoded labels,
    • "binary" will be 1D binary labels, "sparse" will be 1D integer labels,
    • "input" will be images identical to the input images (mainly used to work with autoencoders),
    • If None, no labels are returned (the generator will only yield batches of image data, which is useful with model.predict()). Please note that in the case of class_mode None, the data still needs to reside in a subdirectory of directory for it to work correctly.
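How a single integer label comes out under each class_mode can be sketched without Keras at all (the helper below is illustrative only, not the library's implementation):

```python
def encode_label(label, num_classes, class_mode):
    # 'sparse' keeps the integer index, 'categorical' one-hot encodes it,
    # 'binary' returns a 0./1. scalar, and None means no label at all.
    if class_mode == "sparse":
        return label
    if class_mode == "categorical":
        return [1.0 if i == label else 0.0 for i in range(num_classes)]
    if class_mode == "binary":
        return float(label)
    return None

print(encode_label(2, 4, "categorical"))  # [0.0, 0.0, 1.0, 0.0]
print(encode_label(2, 4, "sparse"))       # 2
print(encode_label(1, 2, "binary"))       # 1.0
print(encode_label(2, 4, None))           # None
```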
    Args

    • directory: string, path to the target directory. It should contain one subdirectory per class. Any PNG, JPG, BMP, PPM or TIF images inside each of the subdirectories directory tree will be included in the generator. See this script for more details.
    • target_size: Tuple of integers, default: (256, 256). The dimensions to which all images found will be resized.
    • color_mode: One of "grayscale", "rgb", "rgba". Default: "rgb". Whether the images will be converted to have 1, 3, or 4 channels.
    • classes: Optional list of class subdirectories (e.g. ['dogs', 'cats']). Default: None. If not provided, the list of classes will be automatically inferred from the subdirectory names/structure under directory, where each subdirectory will be treated as a different class (and the order of the classes, which will map to the label indices, will be alphanumeric). The dictionary containing the mapping from class names to class indices can be obtained via the attribute class_indices.
    • class_mode: One of "categorical", "binary", "sparse", "input", or None. Default: "categorical". Determines the type of label arrays that are returned.
    • batch_size: Size of the batches of data (default: 32).
    • shuffle: Whether to shuffle the data (default: True). If set to False, sorts the data in alphanumeric order.
    • seed: Optional random seed for shuffling and transformations.
    • save_to_dir: None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
    • save_prefix: Str. Prefix to use for filenames of saved pictures (only relevant if save_to_dir is set).
    • save_format: one of "png", "jpeg", "bmp", "pdf", "ppm", "gif", "tif", "jpg" (only relevant if save_to_dir is set). Default: "png".
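When save_to_dir is set, saved files are named from the prefix, a running index, a random hash and the chosen format. A sketch of that kind of naming scheme (the exact pattern below is illustrative, not the literal Keras implementation):

```python
import random

def saved_filename(prefix, index, fmt, rng=None):
    # Combines save_prefix, the batch index, a random hash and save_format;
    # prefix and format only matter once save_to_dir is set.
    rng = rng or random.Random()
    return "{prefix}_{index}_{hash}.{format}".format(
        prefix=prefix, index=index, hash=rng.randint(0, 9999), format=fmt
    )

name = saved_filename("aug", 0, "png", random.Random(42))
print(name)
```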
    Example of using flow_from_directory():

    train_datagen = ImageDataGenerator(
            rescale=1./255,
            shear_range=0.2,
            zoom_range=0.2,
            horizontal_flip=True)
    test_datagen = ImageDataGenerator(rescale=1./255)
    train_generator = train_datagen.flow_from_directory(
            'data/train',
            target_size=(150, 150),
            batch_size=32,
            class_mode='binary')
    validation_generator = test_datagen.flow_from_directory(
            'data/validation',
            target_size=(150, 150),
            batch_size=32,
            class_mode='binary')
    model.fit(
            train_generator,
            steps_per_epoch=2000,
            epochs=50,
            validation_data=validation_generator,
            validation_steps=800)
    
    • follow_links: Whether to follow symlinks inside class subdirectories (default: False).
    • subset: Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator.
    interpolation: Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used.
    Example of using flow_from_directory(directory):

    train_datagen = ImageDataGenerator(
            rescale=1./255,
            shear_range=0.2,
            zoom_range=0.2,
            horizontal_flip=True)
    test_datagen = ImageDataGenerator(rescale=1./255)
    train_generator = train_datagen.flow_from_directory(
            'data/train',
            target_size=(150, 150),
            batch_size=32,
            class_mode='binary')
    validation_generator = test_datagen.flow_from_directory(
            'data/validation',
            target_size=(150, 150),
            batch_size=32,
            class_mode='binary')
    model.fit(
            train_generator,
            steps_per_epoch=2000,
            epochs=50,
            validation_data=validation_generator,
            validation_steps=800)
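    The steps_per_epoch and validation_steps values in a call like this are normally derived from dataset size and batch size. A small illustrative helper (an assumed convention, using ceiling division so a partial final batch is still counted):

```python
import math

def steps_for(num_samples, batch_size):
    """Number of generator batches needed to cover the dataset once."""
    return math.ceil(num_samples / batch_size)

print(steps_for(50000, 32))  # 1563 batches per epoch for 50,000 images
```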
    
    keep_aspect_ratio: Boolean, whether to resize images to a target size without aspect ratio distortion. The image is cropped in the center with the target aspect ratio before resizing.

    Returns: A DirectoryIterator yielding tuples of (x, y) where x is a numpy array containing a batch of images with shape (batch_size, *target_size, channels) and y is a numpy array of corresponding labels.
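    The center crop behind keep_aspect_ratio can be sketched in plain numpy. This is a simplified illustration of the cropping step only (the function name and logic are hypothetical, not Keras's implementation, and the subsequent resize is omitted):

```python
import numpy as np

def center_crop_to_aspect(img, target_h, target_w):
    """Crop an (H, W, C) image around its center to the target aspect ratio."""
    h, w = img.shape[:2]
    target_ratio = target_w / target_h
    if w / h > target_ratio:              # too wide: trim the sides
        new_w = int(h * target_ratio)
        left = (w - new_w) // 2
        return img[:, left:left + new_w]
    new_h = int(w / target_ratio)         # too tall: trim top and bottom
    top = (h - new_h) // 2
    return img[top:top + new_h, :]

tall = np.zeros((100, 60, 3))
cropped = center_crop_to_aspect(tall, 150, 150)  # square target -> 60x60 crop
```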


    get_random_transform(
        img_shape, seed=None
    )

    Generates random parameters for a transformation.

    Args:
        img_shape: Tuple of integers. Shape of the image that is transformed.
        seed: Random seed.
    Returns: A random transformation dictionary containing randomly chosen parameters describing the transformation.
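    As a rough illustration of what such a dictionary looks like, here is a hypothetical sketch that samples a few of the parameters. The key names mirror those Keras uses ('theta', 'tx', 'ty', 'flip_horizontal'), but the helper name and sampling logic are illustrative only:

```python
import numpy as np

def get_random_transform_sketch(img_shape, rotation_range=20,
                                height_shift_range=0.2, width_shift_range=0.2,
                                horizontal_flip=True, seed=None):
    """Illustrative stand-in for ImageDataGenerator.get_random_transform."""
    rng = np.random.default_rng(seed)
    h, w = img_shape[0], img_shape[1]
    return {
        'theta': rng.uniform(-rotation_range, rotation_range),   # rotation, degrees
        'tx': rng.uniform(-height_shift_range, height_shift_range) * h,
        'ty': rng.uniform(-width_shift_range, width_shift_range) * w,
        'flip_horizontal': bool(horizontal_flip and rng.random() < 0.5),
    }

params = get_random_transform_sketch((32, 32, 3), seed=42)
```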

    random_transform(
        x, seed=None
    )

    Applies a random transformation to an image.

    Args:
        x: 3D tensor, single image.
        seed: Random seed.
    Returns: A randomly transformed version of the input (same shape).
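    To make the "same shape" contract concrete, here is a toy version of applying such parameters. It handles only the horizontal flip and a wrap-around vertical shift (Keras itself applies a full affine warp with interpolation), and the function name is illustrative:

```python
import numpy as np

def apply_transform_sketch(x, params):
    """Apply a subset of transform parameters to a 3D image tensor (H, W, C)."""
    out = x
    if params.get('flip_horizontal'):
        out = out[:, ::-1]                 # mirror left-right
    tx = int(params.get('tx', 0))          # vertical shift in pixels
    if tx:
        out = np.roll(out, tx, axis=0)     # toy shift: wraps instead of padding
    return out

img = np.arange(12, dtype=float).reshape(2, 2, 3)
flipped = apply_transform_sketch(img, {'flip_horizontal': True})
```

    The output always has the same shape as the input, matching the contract described above.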

    standardize(
        x
    )

    Applies the normalization configuration in-place to a batch of inputs.

    x is changed in-place since the function is mainly used internally to normalize images and feed them to your network. If a copy of x were created instead, it would have a significant performance cost. If you want to apply this method without changing the input in-place, you can call the method creating a copy before:

    standardize(np.copy(x))
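    The in-place behaviour described above can be demonstrated with a small numpy sketch of a standardize-style function (a simplification assuming only featurewise mean/std normalization; not the real method, which also handles rescaling and other options):

```python
import numpy as np

def standardize_sketch(x, mean, std):
    """Normalize a batch in place, the way standardize mutates its input."""
    x -= mean            # in-place: the caller's array is modified
    x /= std + 1e-6
    return x

batch = np.full((2, 4, 4, 3), 10.0)
original = batch.copy()

# Safe variant, as suggested above: pass a copy so the input survives.
safe = standardize_sketch(np.copy(batch), mean=10.0, std=2.0)
untouched = np.array_equal(batch, original)    # input preserved

# Calling it directly mutates the caller's array.
standardize_sketch(batch, mean=10.0, std=2.0)
mutated = not np.array_equal(batch, original)  # input changed
```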

    How do I import preprocessing in Python?

    Getting started with data preprocessing in Python:
    from google.colab import drive; drive. ...
    import numpy as np; import matplotlib.pyplot as plt; import pandas as pd
    Dataset = pd. ...
    print(x)  # returns an array of features

    Does Python have a preprocessor?

    # Macros for logical operators import sys # Defining macros # in Python, function definitions are used as macros # There is no preprocessor in Python # Initialize variables a = int(raw_input("Enter three numbers. ")) b = int(raw_input("Enter three numbers. ")) c = int(raw_input("Enter three numbers. ")) if a > b

    What is the pip install command in Python?

    PIP is the package management system used to install and manage software packages written in Python. It stands for "preferred installer program" or "Pip Installs Packages." PIP for Python is a utility to manage PyPI package installations from the command line.

    What is the Preprocessor library?

    Preprocessor is a preprocessing library for tweet data written in Python. When building a Machine Learning system based on tweet data, preprocessing is required. This library makes it easy to clean, parse, or tokenize tweets.
