Views: 1809 | Replies: 1
[Bounty] Answer the question in this thread and the author Alpha1024 will award you 10 gold coins.
Alpha1024 — New Member (Regular Writer)
[Help] Why does the error say there is only one sample?
I built a convolutional neural network and fed it a training set with multiple samples (the training set, the error, and the code are shown below). Why does the error message say there is only one sample? Where is the problem?

ValueError: Training data contains 1 samples, which is not sufficient to split it into a validation and training set as specified by `validation_split=0.2`. Either provide more data, or a different value for the `validation_split` argument.

Code:

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers

# Define the model
def get_model():
    # Build a Sequential model
    model = tf.keras.Sequential()
    # First convolutional block
    model.add(layers.Conv2D(128, kernel_size=(3, 3), activation='relu',
                            input_shape=(75, 75, 3)))
    model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
    model.add(layers.Dropout(0.2))
    # Second convolutional block
    model.add(layers.Conv2D(128, kernel_size=(3, 3), activation='relu'))
    model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(layers.Dropout(0.2))
    # Third convolutional block
    model.add(layers.Conv2D(64, kernel_size=(2, 2), activation='relu'))
    model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
    model.add(layers.Dropout(0.2))
    # Fourth convolutional block
    model.add(layers.Conv2D(64, kernel_size=(2, 2), activation='relu'))
    model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(layers.Dropout(0.2))
    # Flatten the feature maps into 1-D for the fully connected layers
    model.add(layers.Flatten())
    # First fully connected layer
    model.add(layers.Dense(256))
    model.add(layers.Activation('relu'))
    model.add(layers.Dropout(0.2))
    # Second fully connected layer
    model.add(layers.Dense(128))
    model.add(layers.Activation('relu'))
    model.add(layers.Dropout(0.2))
    # Third fully connected (output) layer
    model.add(layers.Dense(1))
    model.add(layers.Activation('sigmoid'))
    # Compile the model
    model.compile(loss='binary_crossentropy',
                  optimizer=tf.keras.optimizers.Adam(0.0001),
                  metrics=['accuracy'])
    # Print a summary of the model
    model.summary()
    return model

cnn_model = get_model()
cnn_model.fit(train_x, train_y, batch_size=25, epochs=100, verbose=1,
              validation_split=0.2)

Training set display — train_x is a Python list of image arrays, each printed with a leading dimension of 1, for example:

[array([[[110, 110, 110], [110, 110, 110], [109, 109, 109], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]]]),
 array([[[110, 110, 110], [110, 110, 110], [109, 109, 109], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]]]),
 array([[[165, 165, 165], [173, 173, 173], [169, 169, 169], ..., [255, 255, 255], [255, 255, 255], [255, 255, 255]]]),
 array([[[58, 58, 58], [52, 52, 52], [51, 51, 51], ..., [47, 47, 47], [55, 55, 55], [49, 49, 49]]]),
 array([[[ 74, 74, 74], [ 76, 76, 76], [ 71, 71, 71], ..., [110, 110, 110], [106, 106, 106], [108, 108, 108]]]),
 array([[[159, 159, 159], [118, 118, 118], [132, 132, 132], ..., [ 93, 93, 93], [ 95, 95, 95], [ 91, 91, 91]]]),
 ...]

This is train_x (the same handful of arrays repeats; the printout was truncated). And this is train_y:

[array(0), array(0), array(0), array(0), array(1), array(1), array(0), array(0), array(0), array(0), array(1), array(1), array(0), array(0), array(0), array(0), array(1), array(1), array(0)]
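A likely cause, sketched under the assumption that train_x really is a plain Python list of per-image arrays each carrying a leading dimension of 1, as the printout above suggests: when `fit` receives a Python list of arrays, Keras treats it as a set of separate model inputs rather than one batch, so each `(1, ...)` element counts as an input holding exactly one sample — hence "contains 1 samples". Stacking the list into a single ndarray before calling `fit` should resolve that part. The variable names and the 19-image `(1, 75, 75, 3)` shape below are hypothetical stand-ins for the poster's data:

```python
import numpy as np

# Hypothetical stand-in for the poster's data: a list of image arrays,
# each with a spurious leading batch dimension of 1.
train_x_list = [np.random.randint(0, 256, size=(1, 75, 75, 3), dtype=np.uint8)
                for _ in range(19)]
train_y_list = [np.array(int(i % 3 == 1)) for i in range(19)]

# Merge the list into ONE array along the batch axis: 19 samples of
# shape (75, 75, 3), matching the model's declared input_shape.
train_x = np.concatenate(train_x_list, axis=0).astype("float32") / 255.0
train_y = np.asarray(train_y_list, dtype="float32")

print(train_x.shape)  # (19, 75, 75, 3)
print(train_y.shape)  # (19,)
```

With a single `(19, 75, 75, 3)` array, `validation_split=0.2` can carve the tail ~20% off for validation as intended. One caveat: if each element of the real train_x is `(1, 75, 3)` rather than `(1, 75, 75, 3)`, stacking alone is not enough — the images themselves would need to be reloaded or reshaped to `(75, 75, 3)`, since that is what the first Conv2D layer declares.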