This repository has been archived by the owner on Nov 3, 2023. It is now read-only.
v1.2.0 #3620

stephenroller started this discussion in General
This Saturday marks four years since the initial release of ParlAI. I'd like to offer my sincere gratitude to our users, our contributors, and the entire core development team. ParlAI wouldn't be what it is without all of you. - @stephenroller
## Major new features

### Background Preprocessing
Improve your training speeds by 1.25x-5.0x by switching from `--num-workers 0` to `--num-workers N`. See our Speeding up training docs for details. (#3527, #3586, #3575, #3533, #3389)

### (Beta) Support for torch.jit
Deploy faster models by exporting them with TorchScript. Currently limited to BART models only. (#3459)
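Conceptually, background preprocessing overlaps batch preparation with the training loop, so model updates are not stalled waiting on tokenization. A minimal stdlib-only sketch of the idea (illustrative only — these function names are not ParlAI's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor


def preprocess(batch):
    # Stand-in for tokenization/tensorization work.
    return [x * 2 for x in batch]


def train_step(batch):
    # Stand-in for a model update; returns a dummy "loss".
    return sum(batch)


def train(batches, num_workers=0):
    losses = []
    if num_workers == 0:
        # Baseline: preprocess synchronously inside the train loop.
        for b in batches:
            losses.append(train_step(preprocess(b)))
        return losses
    # Background preprocessing: worker threads prepare upcoming
    # batches while the main thread runs train_step on the current one.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for ready in pool.map(preprocess, batches):
            losses.append(train_step(ready))
    return losses


batches = [[1, 2], [3, 4], [5, 6]]
# Both settings produce identical losses; only the wall-clock differs.
assert train(batches, num_workers=0) == train(batches, num_workers=2)
```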
### Support for T5
We now have agents for Google's T5 models. (#3519)
### Opt Presets
Easily use prepackaged opt files as shorthand for long command-line arguments. (#3564)
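An opt preset is just a packaged opt file bundling several arguments. The keys and values below are hypothetical, shown only to illustrate the shape of such a file; see #3564 and the ParlAI docs for the actual preset names and interface:

```json
{
  "model": "transformer/generator",
  "batchsize": 32,
  "learningrate": 1e-06
}
```

A file like this can then stand in for those flags on the command line via ParlAI's `-o` / `--init-opt` mechanism (exact usage hedged; not taken from these release notes).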
### Log/validate/stop based on number of steps
Get up to a 10% speedup in distributed training by switching from `-vtim` or `-veps` to `-vstep`. (#3379, #3555)

## Backwards-incompatible changes
- `model` argument to use `create_agent` (#3472)

## Minor improvements
- `accuracy` with `--skip-generation true` (Add new token_em metric, #3497)

## Bugfixes
- `rewards` field as a number (Support a rewards field in parlai format, #3517)
- `dyn_batch_idx` (#3505) and quality improvements (Messages instead of dicts for get() in ED, DNLI, #3496)

## Crowdsourcing improvements
- `randomize_conversations` to ACUTE blueprint + use (#3528)

## New Datasets
## Doc improvements

## Developer improvements
This discussion was created from the release v1.2.0.