[Help] AttributeError: module ' ' has no attribute ''
Help, everyone: when I run the code I get AttributeError: module 'cs_gan.utils' has no attribute 'get_train_dataset'. The code contains from cs_gan import utils, and utils.py does define a function get_train_dataset. How should I resolve this?
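One common cause of this error is that Python resolves cs_gan.utils to a different file than the one being edited, for example an installed copy in site-packages or a second checkout earlier on sys.path. A minimal diagnostic sketch, assuming only the import from the question:

from cs_gan import utils

# Show which file Python actually loaded for cs_gan.utils. If this path is
# not the utils.py you are editing, the missing attribute is explained.
print(utils.__file__)
print(hasattr(utils, 'get_train_dataset'))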
Hello, after calling dir(utils) I can see attributes such as _get_np_data and _get_ckpt_dir in the module, but there is no get_train_dataset. Yet utils.py does contain

def get_train_dataset(data_processor, dataset, batch_size):
  """Creates the training data tensors."""
  x_train = _get_np_data(data_processor, dataset, split='train')
  # Create the TF dataset.
  dataset = tf.data.Dataset.from_tensor_slices(x_train)

and the same error is still raised. What should I do now? Thanks for your help.
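Since dir(utils) shows the private helpers (_get_np_data and so on) but not get_train_dataset, the interpreter has most likely loaded an older revision of the file. One way to confirm this, sketched under the assumption that the import itself succeeds, is to check the source file that backs the loaded module:

import inspect
from cs_gan import utils

# Locate the source file behind the imported module and check whether the
# missing function is even present in that file's text.
src_file = inspect.getsourcefile(utils)
print(src_file)
with open(src_file) as f:
    print('get_train_dataset' in f.read())

If this prints False, the file Python imported is not the utils.py that was inspected in the editor.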
Hello, this is the open-source code for Yan Wu, Mihaela Rosca, Timothy Lillicrap, Deep Compressed Sensing, ICML 2019 (the repository states: "This is the example code for the following ICML 2019 paper. If you use the code here please cite this paper"); the URL is https://github.com/deepmind/deepmind-research/tree/master/cs_gan. Below are the error message, followed by the code of main_cs.py (the file whose run fails) and utils.py.

  File "D:\deep compressed sensing\main_cs.py", line 76, in main
    images = utils.get_train_dataset(data_processor, FLAGS.dataset,
AttributeError: module 'cs_gan.utils' has no attribute 'get_train_dataset'

#main_cs.py
"""Training script."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
from absl import app
from absl import flags
from absl import logging
import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp

from cs_gan import cs
from cs_gan import file_utils
from cs_gan import utils

tfd = tfp.distributions

flags.DEFINE_string(
    'mode', 'recons', 'Model mode.')
flags.DEFINE_integer(
    'num_training_iterations', 10000000,
    'Number of training iterations.')
flags.DEFINE_integer(
    'batch_size', 64, 'Training batch size.')
flags.DEFINE_integer(
    'num_measurements', 25, 'The number of measurements')
flags.DEFINE_integer(
    'num_latents', 100, 'The number of latents')
flags.DEFINE_integer(
    'num_z_iters', 3, 'The number of latent optimisation steps.')
flags.DEFINE_float(
    'z_step_size', 0.01, 'Step size for latent optimisation.')
flags.DEFINE_string(
    'z_project_method', 'norm', 'The method to project z.')
flags.DEFINE_integer(
    'summary_every_step', 1000,
    'The interval at which to log debug ops.')
flags.DEFINE_integer(
    'export_every', 10,
    'The interval at which to export samples.')
flags.DEFINE_string(
    'dataset', 'mnist', 'The dataset used for learning (cifar|mnist.')
flags.DEFINE_float('learning_rate', 1e-4, 'Learning rate.')
flags.DEFINE_string(
    'output_dir', '/tmp/cs_gan/cs', 'Location where to save output files.')

FLAGS = flags.FLAGS

# Log info level (for Hooks).
tf.logging.set_verbosity(tf.logging.INFO)


def main(argv):
  del argv

  utils.make_output_dir(FLAGS.output_dir)
  data_processor = utils.DataProcessor()
  images = utils.get_train_dataset(data_processor, FLAGS.dataset,
                                   FLAGS.batch_size)

  logging.info('Learning rate: %d', FLAGS.learning_rate)

  # Construct optimizers.
  optimizer = tf.train.AdamOptimizer(FLAGS.learning_rate)

  # Create the networks and models.
  generator = utils.get_generator(FLAGS.dataset)
  metric_net = utils.get_metric_net(FLAGS.dataset, FLAGS.num_measurements)

  model = cs.CS(metric_net, generator,
                FLAGS.num_z_iters, FLAGS.z_step_size, FLAGS.z_project_method)
  prior = utils.make_prior(FLAGS.num_latents)
  generator_inputs = prior.sample(FLAGS.batch_size)

  model_output = model.connect(images, generator_inputs)
  optimization_components = model_output.optimization_components
  debug_ops = model_output.debug_ops
  reconstructions, _ = utils.optimise_and_sample(
      generator_inputs, model, images, is_training=False)

  global_step = tf.train.get_or_create_global_step()
  update_op = optimizer.minimize(
      optimization_components.loss,
      var_list=optimization_components.vars,
      global_step=global_step)

  sample_exporter = file_utils.FileExporter(
      os.path.join(FLAGS.output_dir, 'reconstructions'))

  # Hooks.
  debug_ops['it'] = global_step
  # Abort training on Nans.
  nan_hook = tf.train.NanTensorHook(optimization_components.loss)
  # Step counter.
  step_conter_hook = tf.train.StepCounterHook()

  checkpoint_saver_hook = tf.train.CheckpointSaverHook(
      checkpoint_dir=utils.get_ckpt_dir(FLAGS.output_dir), save_secs=10 * 60)

  loss_summary_saver_hook = tf.train.SummarySaverHook(
      save_steps=FLAGS.summary_every_step,
      output_dir=os.path.join(FLAGS.output_dir, 'summaries'),
      summary_op=utils.get_summaries(debug_ops))

  hooks = [checkpoint_saver_hook, nan_hook, step_conter_hook,
           loss_summary_saver_hook]

  # Start training.
  with tf.train.MonitoredSession(hooks=hooks) as sess:
    logging.info('starting training')

    for i in range(FLAGS.num_training_iterations):
      sess.run(update_op)

      if i % FLAGS.export_every == 0:
        reconstructions_np, data_np = sess.run([reconstructions, images])
        # Create an object which gets data and does the processing.
        data_np = data_processor.postprocess(data_np)
        reconstructions_np = data_processor.postprocess(reconstructions_np)
        sample_exporter.save(reconstructions_np, 'reconstructions')
        sample_exporter.save(data_np, 'data')


if __name__ == '__main__':
  app.run(main)

#utils.py
"""Tools for latent optimisation."""
import collections
import os

from absl import logging
import numpy as np
import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp

from cs_gan import nets

tfd = tfp.distributions


class ModelOutputs(
    collections.namedtuple('AdversarialModelOutputs',
                           ['optimization_components', 'debug_ops'])):
  """All the information produced by the adversarial module.

  Fields:

    * `optimization_components`: A dictionary. Each entry in this dictionary
      corresponds to a module to train using their own optimizer. The keys are
      names of the components, and the values are
      `common.OptimizationComponent` instances. The keys of this dict can be
      made keys of the configuration used by the main train loop, to define
      the configuration of the optimization details for each module.
    * `debug_ops`: A dictionary, from string to a scalar `tf.Tensor`.
      Quantities used for tracking training.
  """


class OptimizationComponent(
    collections.namedtuple('OptimizationComponent', ['loss', 'vars'])):
  """Information needed by the optimizer to train modules.

  Usage:
      `optimizer.minimize(
          opt_compoment.loss, var_list=opt_component.vars)`

  Fields:

    * `loss`: A `tf.Tensor` the loss of the module.
    * `vars`: A list of variables, the ones which will be used to minimize
      the loss.
  """


def cross_entropy_loss(logits, expected):
  """The cross entropy classification loss between logits and expected values.

  The loss proposed by the original GAN paper: https://arxiv.org/abs/1406.2661.

  Args:
    logits: a `tf.Tensor`, the model produced logits.
    expected: a `tf.Tensor`, the expected output.

  Returns:
    A scalar `tf.Tensor`, the average loss obtained on the given inputs.

  Raises:
    ValueError: if the logits do not have shape [batch_size, 2].
  """

  num_logits = logits.get_shape()[1]
  if num_logits != 2:
    raise ValueError(('Invalid number of logits for cross_entropy_loss! '
                      'cross_entropy_loss supports only 2 output logits!'))
  return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
      logits=logits, labels=expected))


def optimise_and_sample(init_z, module, data, is_training):
  """Optimising generator latent variables and sample."""
  if module.num_z_iters == 0:
    z_final = init_z
  else:
    init_loop_vars = (0, _project_z(init_z, module.z_project_method))
    loop_cond = lambda i, _: i < module.num_z_iters
    def loop_body(i, z):
      loop_samples = module.generator(z, is_training)
      gen_loss = module.gen_loss_fn(data, loop_samples)
      z_grad = tf.gradients(gen_loss, z)[0]
      z -= module.z_step_size * z_grad
      z = _project_z(z, module.z_project_method)
      return i + 1, z

    # Use the following static loop for debugging
    # z = init_z
    # for _ in xrange(num_z_iters):
    #   _, z = loop_body(0, z)
    # z_final = z

    _, z_final = tf.while_loop(loop_cond, loop_body, init_loop_vars)
  return module.generator(z_final, is_training), z_final


def get_optimisation_cost(initial_z, optimised_z):
  optimisation_cost = tf.reduce_mean(
      tf.reduce_sum((optimised_z - initial_z)**2, -1))
  return optimisation_cost


def _project_z(z, project_method='clip'):
  """To be used for projected gradient descent over z."""
  if project_method == 'norm':
    z_p = tf.nn.l2_normalize(z, axis=-1)
  elif project_method == 'clip':
    z_p = tf.clip_by_value(z, -1, 1)
  else:
    raise ValueError('Unknown project_method: {}'.format(project_method))
  return z_p


class DataProcessor(object):

  def preprocess(self, x):
    return x * 2 - 1

  def postprocess(self, x):
    return (x + 1) / 2.


def _get_np_data(data_processor, dataset, split='train'):
  """Get the dataset as numpy arrays."""
  index = 0 if split == 'train' else 1
  if dataset == 'mnist':
    # Construct the dataset.
    x, _ = tf.keras.datasets.mnist.load_data()[index]
    # Note: tf dataset is binary so we convert it to float.
    x = x.astype(np.float32)
    x = x / 255.
    x = x.reshape((-1, 28, 28, 1))

  if dataset == 'cifar':
    x, _ = tf.keras.datasets.cifar10.load_data()[index]
    x = x.astype(np.float32)
    x = x / 255.

  if data_processor:
    # Normalize data if a processor is given.
    x = data_processor.preprocess(x)
  return x


def make_output_dir(output_dir):
  logging.info('Creating output dir %s', output_dir)
  if not tf.gfile.IsDirectory(output_dir):
    tf.gfile.MakeDirs(output_dir)


def get_ckpt_dir(output_dir):
  ckpt_dir = os.path.join(output_dir, 'ckpt')
  if not tf.gfile.IsDirectory(ckpt_dir):
    tf.gfile.MakeDirs(ckpt_dir)
  return ckpt_dir


def get_real_data_for_eval(num_eval_samples, dataset, split='valid'):
  data = _get_np_data(data_processor=None, dataset=dataset, split=split)
  data = data[:num_eval_samples]
  return tf.constant(data)


def get_summaries(ops):
  summaries = []
  for name, op in ops.items():
    # Ensure to log the value ops before writing them in the summary.
    # We do this instead of a hook to ensure IS/FID are never computed twice.
    print_op = tf.print(name, [op], output_stream=tf.logging.info)
    with tf.control_dependencies([print_op]):
      summary = tf.summary.scalar(name, op)
    summaries.append(summary)

  return summaries


def get_train_dataset(data_processor, dataset, batch_size):
  """Creates the training data tensors."""
  x_train = _get_np_data(data_processor, dataset, split='train')
  # Create the TF dataset.
  dataset = tf.data.Dataset.from_tensor_slices(x_train)

  # Shuffle and repeat the dataset for training.
  # This is required because we want to do multiple passes through the entire
  # dataset when training.
  dataset = dataset.shuffle(100000).repeat()

  # Batch the data and return the data batch.
  one_shot_iterator = dataset.batch(batch_size).make_one_shot_iterator()
  data_batch = one_shot_iterator.get_next()
  return data_batch


def get_generator(dataset):
  if dataset == 'mnist':
    return nets.MLPGeneratorNet()
  if dataset == 'cifar':
    return nets.SNGenNet()


def get_metric_net(dataset, num_outputs=2):
  if dataset == 'mnist':
    return nets.MLPMetricNet(num_outputs)
  if dataset == 'cifar':
    return nets.SNMetricNet(num_outputs)


def make_prior(num_latents):
  # Zero mean, unit variance prior.
  prior_mean = tf.zeros(shape=(num_latents), dtype=tf.float32)
  prior_scale = tf.ones(shape=(num_latents), dtype=tf.float32)

  return tfd.Normal(loc=prior_mean, scale=prior_scale)
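Since the repository's utils.py shown above clearly defines get_train_dataset, one plausible remaining explanation (an assumption, not something established in the thread) is a second, older copy of the cs_gan package sitting earlier on sys.path, for example under D:\deep compressed sensing from an earlier download. A hypothetical sketch that lists every copy the interpreter can see:

import importlib.util
import pathlib
import sys

# The file that 'from cs_gan import utils' actually resolves to.
spec = importlib.util.find_spec('cs_gan.utils')
print(spec.origin if spec else 'cs_gan.utils not found')

# Every cs_gan/utils.py visible on sys.path; an older duplicate earlier on
# the path shadows the one being edited.
for entry in sys.path:
    candidate = pathlib.Path(entry or '.') / 'cs_gan' / 'utils.py'
    if candidate.exists():
        print(candidate)

Deleting or renaming any stale copy (or reordering sys.path) should then let the edited utils.py load.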