🚀 Feature
I’d like to add a new NLP metric: `Perplexity` under `ignite.metrics.nlp`, and expose it in the top-level `ignite.metrics`.
Motivation
Perplexity is a standard metric for language modeling, and having it built into Ignite would make evaluation pipelines easier and more consistent with other built-in metrics.
Proposal
- Add a `Perplexity` metric implementation in `ignite.metrics.nlp.perplexity`.
- Export it from:
  - `ignite.metrics.nlp`
  - `ignite.metrics` (top-level import path)
- Include tests for:
- correctness vs manual computation
- token-weighted accumulation across updates
- reset behavior
- input shape validation
- edge cases (single token / empty compute handling)
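To make the intended semantics concrete, here is a minimal, framework-free sketch of the token-weighted accumulation the tests above would cover. The class name and method signatures are hypothetical (the real metric would subclass Ignite's `Metric` and work on tensors); this only illustrates the reset/update/compute contract:

```python
import math

class PerplexitySketch:
    """Hypothetical sketch of token-weighted perplexity accumulation.

    Not the actual Ignite API: a real implementation would subclass
    ignite.metrics.Metric and consume model outputs in update().
    """

    def __init__(self):
        self.reset()

    def reset(self):
        self._nll_sum = 0.0    # summed negative log-likelihood over all tokens
        self._num_tokens = 0   # token count accumulated across updates

    def update(self, nll_sum, num_tokens):
        # nll_sum: total NLL of a batch; num_tokens: number of tokens in it.
        # Weighting by token count keeps batches of different sizes comparable.
        self._nll_sum += nll_sum
        self._num_tokens += num_tokens

    def compute(self):
        if self._num_tokens == 0:
            # Mirrors the "empty compute handling" test case above.
            raise ValueError("Perplexity needs at least one update() before compute().")
        return math.exp(self._nll_sum / self._num_tokens)

m = PerplexitySketch()
m.update(nll_sum=6.0, num_tokens=3)  # batch 1: mean NLL 2.0
m.update(nll_sum=2.0, num_tokens=1)  # batch 2: mean NLL 2.0
print(m.compute())                   # exp(8/4) = exp(2) ≈ 7.389
```

Token weighting matters because averaging per-batch perplexities directly would bias the result toward small batches.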
Alternatives considered
Users can currently compute perplexity manually from cross-entropy, but that is repetitive and error-prone across projects.
Additional context
I have a working branch and tests for this implementation and can open a PR right after this issue is created.