Based on the README and the paper, I am using the following hyper-parameters for training the variable-rate CPC:
```
python cpc/train.py --pathDB /path_datasets/LibriSpeech/train-clean-100 --file_extension '.flac' \
    --pathCheckpoint ./hcpc --normMode layerNorm --dropout --n_process_loader 1 --batchSizeGPU 32 \
    --CPCCTC --nPredicts 12 --CPCCTCNumMatched 12 --limitNegsInBatch 8 --nEpoch 50 --nGPU 1 \
    --nLevelsGRU 2 --schedulerRamp 10 --multiLevel --segmentationMode boundaryPredictor \
    --nPredictsSegment 2 --CPCCTCNumMatchedSegment 2 --adjacentNegatives \
    --targetQuantizerSegment robustKmeans
```
However, the phone segmentation results (R-value 73.23) are lower than those reported in the paper (R-value 81.98) on the LibriSpeech dataset. I seem to be missing something. Could you please take a look and share the optimal parameters?
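For reference, this is how I am computing the R-value from boundary precision and recall; it is a minimal sketch assuming the standard definition from Räsänen et al. (2009), the function name and arguments are my own, and the tolerance-based matching that produces precision/recall is not shown. If the repo's evaluation uses a different convention, please let me know.

```python
import math

def r_value(precision: float, recall: float) -> float:
    """R-value (in %) from boundary precision and recall (both in %),
    following the usual definition (Räsänen et al., 2009)."""
    # Over-segmentation: relative excess of predicted boundaries, in percent.
    os = recall / precision * 100.0 - 100.0
    # Distance from the ideal operating point (recall = 100 %, OS = 0 %).
    r1 = math.sqrt((100.0 - recall) ** 2 + os ** 2)
    # Signed distance from the line recall - OS = 100 %.
    r2 = (-os + recall - 100.0) / math.sqrt(2.0)
    return 100.0 - (abs(r1) + abs(r2)) / 2.0

# Illustrative numbers only, not my measured results:
print(r_value(precision=82.0, recall=79.0))  # ≈ 83.2
```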
Thank you.