This paper presents the first framework to deliberately train neural networks for accuracy and agreement between feature attribution techniques: PEAR (Post hoc Explainer Agreement Regularizer). In addition to the conventional task loss, PEAR incorporates a correlation-based consensus loss that combines Pearson and Spearman correlation measures, promoting alignment across explainers like Grad and Integrated Gradients. By using a soft ranking approximation to address differentiability issues, the loss function is completely trainable by backpropagation. Tested on three OpenML tabular datasets, multilayer perceptrons trained using PEAR surpass linear baselines in accuracy and explanation consensus, and in certain instances, even compete with XGBoost. The findings advance reliable and interpretable AI by showing that consensus-aware training successfully reduces explanation disagreement while maintaining prediction performance.

Notes on Training Neural Networks for Consensus


Abstract and 1. Introduction

1.1 Post Hoc Explanation

1.2 The Disagreement Problem

1.3 Encouraging Explanation Consensus

  2. Related Work

  3. PEAR: Post Hoc Explainer Agreement Regularizer

  4. The Efficacy of Consensus Training

    4.1 Agreement Metrics

    4.2 Improving Consensus Metrics

    4.3 Consistency At What Cost?

    4.4 Are the Explanations Still Valuable?

    4.5 Consensus and Linearity

    4.6 Two Loss Terms

  5. Discussion

    5.1 Future Work

    5.2 Conclusion, Acknowledgements, and References

Appendix

3 PEAR: POST HOC EXPLAINER AGREEMENT REGULARIZER

Our contribution is the first effort to train models to be both accurate and explicitly regularized via consensus between local explainers. When neural networks are trained naturally (i.e., with a single task-specific loss term like cross-entropy), disagreement between post hoc explainers often arises. Therefore, we include an additional loss term that measures the amount of explainer disagreement during training, encouraging consensus between explanations. Since human-aligned notions of explanation consensus can be captured by more than one agreement metric (listed in A.3), we aim to improve several agreement metrics with one loss function.[2]

Our consensus loss term is a convex combination of the Pearson and Spearman correlation measurements between the vectors of attribution scores (Spearman correlation is just the Pearson correlation on the ranks of a vector).

To paint a clearer picture of the need for two terms in the loss, consider the examples shown in Figure 3. In the upper example, the raw feature scores are very similar and the Pearson correlation coefficient is in fact 1 (to machine precision). However, when we rank these scores by magnitude, their ranks differ substantially, as indicated by the low Spearman value. Likewise, in the lower portion of Figure 3 we show that two explanations with identical magnitudes can nonetheless have a low Pearson correlation coefficient (for example, when the signs of some scores differ). Since some of the metrics we use to measure disagreement involve ranking and others do not, we conclude that a mixture of these two terms in the loss is appropriate.
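To make this distinction concrete, here is a small numeric sketch in the spirit of Figure 3 (the vectors are invented for illustration, not the figure's actual values); Pearson is computed on the raw scores and Spearman on their magnitudes, mirroring the magnitude-based ranking described above:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Upper-style example: the raw scores are a positive affine shift of each
# other, so Pearson on the raw scores is exactly 1, but ranking by magnitude
# (absolute value) reorders the features and drags Spearman down.
a = np.array([0.10, -0.20, 0.30, -0.05])
b = a + 0.25  # [0.35, 0.05, 0.55, 0.20]

print(pearsonr(a, b)[0])                   # 1.0
print(spearmanr(np.abs(a), np.abs(b))[0])  # 0.4, noticeably below 1

# Lower-style example: identical magnitudes (so the magnitude ranks agree
# perfectly) but disagreeing signs, which pushes Pearson negative.
c = np.array([0.50, -0.30, 0.20, -0.10])
d = np.array([-0.50, 0.30, 0.20, 0.10])

print(spearmanr(np.abs(c), np.abs(d))[0])  # 1.0
print(pearsonr(c, d)[0])                   # well below 0
```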

Figure 2: Our loss function measures the task loss between the model outputs and the ground truth (task loss), as well as the disagreement between explainers (consensus loss). The weight given to the consensus loss term is controlled by a hyperparameter 𝜆. The consensus loss term is a convex combination of the Spearman and Pearson correlation measurements between feature importance scores, since increasing both rank correlation (Spearman) and raw-score correlation (Pearson) is useful for improving explainer consensus on our many agreement metrics.

Figure 3: Example feature attribution vectors where Pearson and Spearman show starkly different scores. Recall that both Pearson and Spearman correlation range from −1 to +1. Both of these pairs of vectors satisfy some human-aligned notions of consensus, but in each case one of the correlation metrics gives a low similarity score. Thus, in order to successfully encourage explainer consensus (by all of our metrics), we use both types of correlation in our consensus loss term.

While the example in Figure 3 shows two explanation vectors with similar scale, different explanation methods do not always align in scale. Some explainers have the sums of their attribution scores constrained by various rules, whereas others have no such constraints. The correlation measurements we use in our loss provide more latitude when comparing explainers than a direct difference measurement like mean absolute error or mean squared error, since correlation is insensitive to differences in scale between attribution vectors.


We refer to the first term in the loss function as the task loss, or ℓtask, and for our classification tasks we use cross-entropy loss. A graphical depiction of the flow from data to loss value is shown in Figure 2. Formally, our complete loss function can be expressed as follows with two hyperparameters 𝜆, 𝜇 ∈ [0, 1]. We weight the influence of our consensus term with 𝜆, so lower values give more priority to task loss. We weight the influence between the two explanation correlation terms with 𝜇, so lower values give more weight to Pearson correlation and higher values give more weight to Spearman correlation.
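One way to write a loss consistent with this description is the following sketch, where e₁(x) and e₂(x) are the two explainers' attribution vectors and each correlation is turned into a loss as one minus the correlation; whether the task term carries the (1 − λ) factor and how the correlation terms are normalized are assumptions here, not necessarily the paper's exact formulation:

```latex
\mathcal{L}(x, y) = (1 - \lambda)\,\ell_{\mathrm{task}}\big(f(x), y\big)
  + \lambda \Big[ (1 - \mu)\big(1 - \rho_{\mathrm{Pearson}}(e_1(x), e_2(x))\big)
  + \mu \big(1 - \rho_{\mathrm{Spearman}}(e_1(x), e_2(x))\big) \Big]
```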


3.1 Choosing a Pair of Explainers

The consensus loss term is defined for any two explainers in general, but since we train with standard backpropagation we need these explainers to be differentiable. With this constraint in mind, and with some intuition about the objective of improving agreement metrics, we choose to train for consensus between Grad and IntGrad. If Grad and IntGrad align, then the function should become more locally linear in logit space. IntGrad computes the average gradient along a path in input space toward each point being explained. So, if we train the model to have a local gradient at each point (Grad) closer to the average gradient along a path to the point (IntGrad), then perhaps an easy way for the model to accomplish that training objective would be for the gradient along the whole path to equal the local gradient from Grad. This may push the model to be more similar to a linear model. This is something we investigate with qualitative and quantitative analysis in Section 4.
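To ground this choice, below is a minimal PyTorch sketch of the two attributions for a batch of tabular inputs (the function names, the zero baseline, and the number of integration steps are illustrative assumptions, not the paper's implementation); `create_graph=True` keeps both attributions differentiable so they can later feed a consensus loss:

```python
import torch

def grad_attributions(model, x, target_class):
    """Grad: gradient of the target-class logit with respect to the input."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    selected = logits.gather(1, target_class.unsqueeze(1)).sum()
    # create_graph=True so the attribution itself remains differentiable
    # with respect to the model parameters.
    (attr,) = torch.autograd.grad(selected, x, create_graph=True)
    return attr  # shape: (batch, num_features)

def intgrad_attributions(model, x, target_class, baseline=None, steps=20):
    """IntGrad: average gradient along the straight path from a baseline to x,
    scaled by (x - baseline); approximated here with a Riemann sum."""
    baseline = torch.zeros_like(x) if baseline is None else baseline
    total = torch.zeros_like(x)
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        total = total + grad_attributions(model, point, target_class)
    return (x - baseline) * total / steps
```

If the two quantities in this sketch are pushed to agree, the gradients at the path points must resemble the gradient at the endpoint, which is the local-linearity intuition described above.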

3.2 Differentiability

On the note of differentiability, the ranking function 𝑅 is not differentiable. We substitute a soft ranking function from the torchsort package [3]. This provides a floating-point approximation of the ordering of a vector rather than an exact integer computation of that ordering, which allows for differentiation.
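A minimal sketch of such a consensus term, assuming torchsort is installed and that both explanations arrive as (batch, num_features) tensors; the 1 − correlation form, the choice of raw scores rather than magnitudes for ranking, and the regularization strength are illustrative choices rather than the paper's exact settings:

```python
import torch
import torchsort  # pip install torchsort

def rowwise_pearson(x, y, eps=1e-8):
    """Pearson correlation computed per row of two (batch, features) tensors."""
    xc = x - x.mean(dim=1, keepdim=True)
    yc = y - y.mean(dim=1, keepdim=True)
    return (xc * yc).sum(dim=1) / (xc.norm(dim=1) * yc.norm(dim=1) + eps)

def consensus_loss(expl_a, expl_b, mu=0.5):
    """Convex combination of (1 - Pearson) on raw attribution scores and
    (1 - Pearson) on soft ranks, the latter acting as a differentiable
    stand-in for Spearman correlation."""
    pearson_term = 1.0 - rowwise_pearson(expl_a, expl_b)
    ranks_a = torchsort.soft_rank(expl_a, regularization_strength=0.1)
    ranks_b = torchsort.soft_rank(expl_b, regularization_strength=0.1)
    spearman_term = 1.0 - rowwise_pearson(ranks_a, ranks_b)
    return ((1.0 - mu) * pearson_term + mu * spearman_term).mean()
```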

4 THE EFFICACY OF CONSENSUS TRAINING

In this section we present each experiment with the hypothesis it is designed to test. The datasets we use for our experiments are Bank Marketing, California Housing, and Electricity, three binary classification datasets available on the OpenML database [39]. For each dataset, we use a linear model’s performance (logistic regression) as a lower bound of realistic performance because linear models are considered inherently explainable.
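As a sketch of that linear lower bound (the OpenML dataset name and version, and the minimal preprocessing, are assumptions for illustration), the logistic-regression reference could be set up like this:

```python
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Electricity is one of the three binary classification tasks; the exact
# OpenML name/version used in the paper is assumed here.
data = fetch_openml("electricity", version=1, as_frame=True)
X = data.data.select_dtypes("number")  # keep numeric features for the linear baseline
y = (data.target == data.target.cat.categories[1]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
linear_baseline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
linear_baseline.fit(X_tr, y_tr)
print("linear-model accuracy:", linear_baseline.score(X_te, y_te))
```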

The models we train to study the impact of our consensus loss term are multilayer perceptrons (MLPs). While tabular deep learning is still a growing field, and MLPs may be an unlikely choice for many data scientists working with tabular data, deep networks provide the flexibility to adapt training loops for multiple objectives [1, 10, 17, 28, 31, 35]. We also verify that our MLPs outperform linear models on each dataset, because if deep models trained to reach consensus were less accurate than a linear model, we would be better off using the linear model.
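Putting the pieces together, a compact sketch of one training step for such an MLP might look like the following (the architecture and hyperparameters are placeholders, and `grad_attributions`, `intgrad_attributions`, and `consensus_loss` are the illustrative helpers sketched in Section 3, not the PEAR package API):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()
lam, mu = 0.5, 0.5  # consensus weight and Pearson/Spearman mix

def training_step(x, y):
    logits = model(x)
    task_loss = task_loss_fn(logits, y)
    # Explain the label class here for simplicity; explaining the predicted
    # class is an equally reasonable choice.
    e_grad = grad_attributions(model, x, y)
    e_intgrad = intgrad_attributions(model, x, y)
    loss = (1 - lam) * task_loss + lam * consensus_loss(e_grad, e_intgrad, mu)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```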

We include XGBoost [6] as a point of comparison for our approach, as it has become a widely used method with high performance and strong consensus metrics on many tabular datasets (figures in Appendix A.7). There are cases where we achieve more explainer consensus than XGBoost, but this comparison is tangential, as our focus is on a loss for training neural networks.

For further details on our datasets and model training hyperparameters, see Appendices A.1 and A.2.


:::info Authors:

(1) Avi Schwarzschild, University of Maryland, College Park, Maryland, USA and Work completed while working at Arthur (avi1umd.edu);

(2) Max Cembalest, Arthur, New York City, New York, USA;

(3) Karthik Rao, Arthur, New York City, New York, USA;

(4) Keegan Hines, Arthur, New York City, New York, USA;

(5) John Dickerson†, Arthur, New York City, New York, USA (john@arthur.ai).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

[2] The PEAR package will be publicly available for download via the Package Installer for Python (pip), and it is also available upon request from the authors.

[3] When more than one of the entries has the same magnitude, they get a common ranking value equal to the average of the ranks they would receive if ordered arbitrarily.
