I am trying out DNN training for my data, and after running run_raw_fmllr.sh successfully I am trying the run_dnn.sh script. However, I got this error:
copy-feats 'ark,s,cs:apply-cmvn --utt2spk=ark:data/test/split10/7/utt2spk scp:data/test/split10/7/cmvn.scp scp:data/test/split10/7/feats.scp ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri3b/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/test/split10/7/utt2spk "ark:cat exp/tri3b/decode/trans.* |" ark:- ark:- |' ark,scp:/Users/enzyme156/desktop/KALDI/data-fmllr-tri3b/test/data/feats_fmllr_test.7.ark,/Users/enzyme156/desktop/KALDI/data-fmllr-tri3b/test/data/feats_fmllr_test.7.scp
ERROR (copy-feats:TableWriter():util/kaldi-table-inl.h:1095) TableWriter: failed to write to ark,scp:/Users/enzyme156/desktop/KALDI/data-fmllr-tri3b/test/data/feats_fmllr_test.7.ark,/Users/enzyme156/desktop/KALDI/data-fmllr-tri3b/test/data/feats_fmllr_test.7.scp
Make sure the directory
/Users/enzyme156/desktop/KALDI/data-fmllr-tri3b/test/data/
exists and is writable.
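If the directory is missing, you can create it yourself and confirm it is writable before re-running the script; a minimal sketch, using the path from the error above:

# Create the output directory (and any missing parents), then check write access.
mkdir -p /Users/enzyme156/desktop/KALDI/data-fmllr-tri3b/test/data/
[ -w /Users/enzyme156/desktop/KALDI/data-fmllr-tri3b/test/data/ ] && echo writable || echo "not writable"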
I tried what you suggested and the script got through the test folder, but when I got to the train folder I got this error:
copy-feats 'ark,s,cs:apply-cmvn --utt2spk=ark:data/train/split10/6/utt2spk scp:data/train/split10/6/cmvn.scp scp:data/train/split10/6/feats.scp ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri3b/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split10/6/utt2spk "ark:cat exp/tri3b_ali/trans.* |" ark:- ark:- |' ark,scp:/Users/enzyme156/Desktop/KALDI/data-fmllr-tri3b/train/data/feats_fmllr_train.6.ark,/Users/enzyme156/Desktop/KALDI/data-fmllr-tri3b/train/data/feats_fmllr_train.6.scp
splice-feats --left-context=3 --right-context=3 ark:- ark:-
apply-cmvn --utt2spk=ark:data/train/split10/6/utt2spk scp:data/train/split10/6/cmvn.scp scp:data/train/split10/6/feats.scp ark:-
ERROR (apply-cmvn:Read():kaldi-matrix.cc:1345) Failed to read matrix from stream. : Expected "[", got EOF File position at start is -1, currently -1
WARNING (apply-cmvn:Read():util/kaldi-holder-inl.h:78) Exception caught reading Table object
WARNING (apply-cmvn:LoadCurrent():util/kaldi-table-inl.h:232) TableReader: failed to load object from /Users/enzyme156/Desktop/KALDI/data/mfcc/raw_mfcc_train.10.ark:933302
ERROR (apply-cmvn:Value():util/kaldi-table-inl.h:142) TableReader: failed to load object from /Users/enzyme156/Desktop/KALDI/data/mfcc/raw_mfcc_train.10.ark:933302 (to suppress this error, add the permissive (p, ) option to the rspecifier.
It looks like /Users/enzyme156/Desktop/KALDI/data/mfcc/raw_mfcc_train.10.ark
does not exist, maybe because it was deleted or moved.
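If that archive really was deleted, one way to check for it and regenerate the train features is sketched below; this assumes the standard recipe layout, and the number of jobs and the mfcc directory (data/mfcc here, matching the path in the error) should be adjusted to your setup:

# See whether the archive the scp entry points to is still on disk.
ls -l /Users/enzyme156/Desktop/KALDI/data/mfcc/raw_mfcc_train.10.ark
# If it is gone, recompute the MFCC features and the CMVN stats for the train set.
steps/make_mfcc.sh --nj 10 data/train exp/make_mfcc/train data/mfcc
steps/compute_cmvn_stats.sh data/train exp/make_mfcc/train data/mfcc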
I have solved this problem, but ran into another error:
local/run_dnn.sh: line 51: exp/dnn4b_pretrain-dbn_dnn/log/pretrain_dbn.log: No such file or directory
The code that called the file:
if [ $stage -le 1 ]; then
# Pre-train DBN, i.e. a stack of RBMs (small database, smaller DNN)
dir=exp/dnn4b_pretrain-dbn
(tail --pid=$$ -F $dir/log/pretrain_dbn.log 2>/dev/null)& # forward log
$cuda_cmd $dir/log/pretrain_dbn.log \
  steps/nnet/pretrain_dbn.sh --hid-dim 1024 --rbm-iter 20 $data_fmllr/train $dir || exit 1;
fi
Probably it's because the variable $cuda_cmd is not defined; with it empty, the shell ends up trying to execute the log path itself, which is exactly the "No such file or directory" message you see.
That part of the script is designed to be run on a GPU, and since I doubt
you have an NVidia GPU attached to your Mac, I advise that you just don't
run the run_dnn.sh script.
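For context, $cuda_cmd is normally defined in the recipe's cmd.sh, which the run scripts source near the top. A minimal sketch of what that file usually contains (run.pl runs jobs locally; on a cluster with GPUs, cuda_cmd would typically point at queue.pl with a GPU request):

# cmd.sh -- sourced by run.sh / run_dnn.sh before any stage executes.
export train_cmd="run.pl"
export decode_cmd="run.pl"
export cuda_cmd="run.pl"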
Yes, you are right. cuda_cmd was not defined. However, if I'm not mistaken, one can run CUDA code on the Mac CPU (http://www.drdobbs.com/parallel/running-cuda-code-natively-on-x86-proces/231500166). I tried setting $cuda_cmd to run.pl, and run_dnn.sh was actually successful (as reported by the log). The resulting model, however, was in .nnet format; is it possible to use that in Kaldi's online recognizer? I checked the final.mdl file in the trained DNN model and it looks just like the topo file, not like other non-nnet models' final.mdl files at all.
If you don't have a GPU it runs on the CPU by default, but it's a lot
slower.
The example scripts should show how to decode with those models; it's part
of run_dnn.sh typically.
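For reference, the decoding stage in that recipe usually looks something like the sketch below; the directory names follow the earlier parts of this thread (exp/tri3b, exp/dnn4b_pretrain-dbn_dnn) and may differ in your copy of run_dnn.sh. In the nnet1 setup the network itself lives in final.nnet, while final.mdl holds only the transition model (which embeds the topology), which is why it looks so much like the topo file.

# Decode the fMLLR test features with the trained nnet1 model;
# steps/nnet/decode.sh picks up final.nnet and final.mdl from $dir.
dir=exp/dnn4b_pretrain-dbn_dnn
steps/nnet/decode.sh --nj 10 --acwt 0.10 --config conf/decode_dnn.config \
  exp/tri3b/graph $data_fmllr/test $dir/decode || exit 1;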