In steps/nnet2/train_multisplice_accel2.sh:
--num-jobs-initial  Number of parallel jobs to use for neural net training, at the start
--num-jobs-final    Number of parallel jobs to use for neural net training, at the end
If the options are set to --num-jobs-initial 3 --num-jobs-final 18, what is the minimum amount of GPU memory needed?
If we change the options to --num-jobs-initial 2 --num-jobs-final 4, is there a large effect on the recognition rate, apart from training being slower? Is 3 GB of GPU memory enough?
If you have only one GPU, you should set --num-jobs-initial and --num-jobs-final to 1; of course, it will be much slower.
It's not designed to share the GPU between processes.
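For the single-GPU case, a minimal invocation could look like the sketch below; the positional arguments (training data, lang directory, alignment directory, output directory) and the example paths are assumptions based on a typical Kaldi nnet2 recipe, not values from this thread.

# Single GPU: keep the number of parallel training jobs at 1 for the whole run,
# since each job expects exclusive use of one GPU.
# (Paths and positional-argument order are assumed from common nnet2 recipes.)
steps/nnet2/train_multisplice_accel2.sh \
  --num-jobs-initial 1 \
  --num-jobs-final 1 \
  data/train data/lang exp/tri4a_ali exp/nnet2_multisplice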
If we have just one server with two GPUs, should we set --num-jobs-initial=1 and --num-jobs-final=2?
Do any other parameters need to be modified in steps/nnet2/train_multisplice_accel2.sh?
I would set both --num-jobs-initial and --num-jobs-final to 2 in that case.
You don't need to modify any other parameters. Because the learning rates supplied to the script are "effective learning rates" rather than the actual learning rates, it automatically adjusts them when you change the number of jobs.
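As an illustration of the two-GPU case, the sketch below keeps both job counts at 2 and leaves the learning-rate options at typical recipe-style values; the --initial-effective-lrate and --final-effective-lrate option names, the example rates, and the positional arguments are assumptions drawn from common accel2 recipes and should be checked against the script's usage message.

# Two GPUs: run 2 parallel jobs from start to finish.
# The effective learning rates are left at illustrative recipe-style values;
# because the script interprets them as *effective* rates, they do not need
# retuning when the number of jobs changes.
steps/nnet2/train_multisplice_accel2.sh \
  --num-jobs-initial 2 \
  --num-jobs-final 2 \
  --initial-effective-lrate 0.005 \
  --final-effective-lrate 0.0005 \
  data/train data/lang exp/tri4a_ali exp/nnet2_multisplice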