Device placement is logged by default #88

@kiudee

Description

We have a utility function configure_numpy_keras which is used in some of the experiment scripts:

import logging
import multiprocessing
import os

import numpy as np
import tensorflow as tf
from keras import backend as K
from tensorflow.python.client import device_lib


def configure_numpy_keras(seed=42):
    tf.set_random_seed(seed)
    os.environ["KERAS_BACKEND"] = "tensorflow"
    devices = [x.name for x in device_lib.list_local_devices()]
    logger = logging.getLogger("ConfigureKeras")
    logger.info("Devices {}".format(devices))
    n_gpus = len([x.name for x in device_lib.list_local_devices()
                  if x.device_type == 'GPU'])
    if n_gpus == 0:
        config = tf.ConfigProto(intra_op_parallelism_threads=1,
                                inter_op_parallelism_threads=1,
                                allow_soft_placement=True,
                                log_device_placement=False,
                                device_count={'CPU': multiprocessing.cpu_count() - 2})
    else:
        config = tf.ConfigProto(allow_soft_placement=True,
                                log_device_placement=True,
                                intra_op_parallelism_threads=2,
                                inter_op_parallelism_threads=2)  # , gpu_options=gpu_options
    sess = tf.Session(config=config)
    K.set_session(sess)
    np.random.seed(seed)
    logger.info("Number of GPUS {}".format(n_gpus))

It does the following:

  • Sets the random seeds
  • Sets KERAS_BACKEND to TensorFlow
  • Checks the number of GPUs and sets the TensorFlow options accordingly
  • Creates a TensorFlow session for Keras to use

There are a few issues (and maybe more) with this:

  • Everything is set to hardcoded constants; making these values configurable is desirable.
  • log_device_placement is set to True on GPU machines, which can cause slowdowns due to logging and should be False by default.
  • It is not clear whether tensorflow_util.py is the correct location if the function is only ever used in the experiment scripts.
  • The function is not documented.
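One possible direction (just a sketch, not a final design) is to split the option selection out into a pure, documented helper with explicit keyword defaults, so experiment scripts can override individual values and log_device_placement stays off unless explicitly requested. The name session_options below is hypothetical; it returns plain keyword arguments that configure_numpy_keras could pass straight to tf.ConfigProto:

```python
import multiprocessing


def session_options(n_gpus, log_device_placement=False,
                    intra_threads=None, inter_threads=None):
    """Build keyword arguments for tf.ConfigProto.

    Defaults mirror the current behaviour (1 thread per pool on CPU-only
    machines, 2 on GPU machines), except that log_device_placement now
    defaults to False everywhere and can be enabled by the caller.
    """
    if n_gpus == 0:
        return {
            "intra_op_parallelism_threads": intra_threads or 1,
            "inter_op_parallelism_threads": inter_threads or 1,
            "allow_soft_placement": True,
            "log_device_placement": log_device_placement,
            # Leave at least one CPU available, even on small machines.
            "device_count": {"CPU": max(1, multiprocessing.cpu_count() - 2)},
        }
    return {
        "intra_op_parallelism_threads": intra_threads or 2,
        "inter_op_parallelism_threads": inter_threads or 2,
        "allow_soft_placement": True,
        "log_device_placement": log_device_placement,
    }
```

An experiment script that actually wants placement logs could then call session_options(n_gpus, log_device_placement=True), while the default path is quiet. Being a pure function, it is also trivially unit-testable without creating a session.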
