GeoHard: Towards Measuring Class-wise Hardness through Modelling Class Semantics

F Cai, X Zhao, H Zhang, I Gurevych… - arXiv preprint arXiv:2407.12512, 2024 - arxiv.org
Recent advances in measuring the hardness-wise properties of data have guided language models in sample selection within low-resource scenarios. However, class-specific properties are overlooked for task setup and learning. How do these properties influence model learning, and are they generalizable across datasets? To answer this question, this work formally introduces the concept of $\textit{class-wise hardness}$. Experiments across eight natural language understanding (NLU) datasets demonstrate a consistent hardness distribution across learning paradigms, models, and human judgment. Subsequent experiments unveil a notable challenge in measuring such class-wise hardness with the instance-level metrics of previous works. To address this, we propose $\textit{GeoHard}$, which measures class-wise hardness by modeling class geometry in the semantic embedding space. $\textit{GeoHard}$ surpasses instance-level metrics by over 59 percent in $\textit{Pearson}$'s correlation when measuring class-wise hardness. Our analysis theoretically and empirically underscores the generality of $\textit{GeoHard}$ as a fresh perspective on data diagnosis. Additionally, we showcase how understanding class-wise hardness can practically aid in improving task learning.
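
The abstract describes GeoHard only at a high level (modeling class geometry in the semantic embedding space), so the following is a minimal sketch of that general idea rather than the paper's actual formulation: score each class by how diffuse it is relative to its separation from the other class centroids. The intra/inter-dispersion ratio and the helper name `class_hardness_scores` are illustrative assumptions; embeddings would typically come from a sentence encoder.

```python
# Illustrative sketch of class-geometry-based hardness scoring.
# The dispersion ratio below is an assumption for illustration,
# not necessarily the exact GeoHard formulation from the paper.
import numpy as np

def class_hardness_scores(embeddings: np.ndarray, labels: np.ndarray) -> dict:
    """Score each class's hardness from semantic embeddings.

    embeddings: (n_samples, dim) array, e.g. from a sentence encoder.
    labels: (n_samples,) array of class labels (at least two classes).
    """
    classes = np.unique(labels)
    centroids = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    scores = {}
    for c in classes:
        members = embeddings[labels == c]
        # Intra-class dispersion: mean distance of members to their centroid.
        intra = np.linalg.norm(members - centroids[c], axis=1).mean()
        # Inter-class separation: mean distance to the other class centroids.
        inter = np.mean([np.linalg.norm(centroids[c] - centroids[o])
                         for o in classes if o != c])
        # Hypothetical score: a diffuse class that sits close to its
        # neighbours is considered harder.
        scores[c] = float(intra / (inter + 1e-12))
    return scores

# Toy usage: class 1 is more spread out, so it should score as harder.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
lab = np.array([0] * 50 + [1] * 50)
print(class_hardness_scores(emb, lab))
```

To mirror the abstract's evaluation, such class-level scores could then be correlated against a reference class-hardness ranking (e.g. per-class model error rates or aggregated instance-level metrics) with scipy.stats.pearsonr.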