<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Insights | Chenglu Zhu</title><link>https://hzzcl.github.io/resume.io/insights/</link><atom:link href="https://hzzcl.github.io/resume.io/insights/index.xml" rel="self" type="application/rss+xml"/><description>Insights</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Thu, 29 Jan 2026 00:00:00 +0000</lastBuildDate><image><url>https://hzzcl.github.io/resume.io/media/icon_hu1c04a90d961651ebaa864f5d44daa878_19395_512x512_fill_lanczos_center_3.png</url><title>Insights</title><link>https://hzzcl.github.io/resume.io/insights/</link></image><item><title>Research on Robustness and Generalization Learning Theory in Pathological Image Analysis</title><link>https://hzzcl.github.io/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/</link><pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate><guid>https://hzzcl.github.io/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/</guid><description>&lt;p>Clinical translation of pathological image analysis faces three fundamental theoretical hurdles: mixed data biases (co-occurring noise and long-tail distributions), cross-center domain shifts, and the need for unsupervised annotation.&lt;/p>
&lt;p>This project addresses these issues through &lt;strong>gradient optimization dynamics&lt;/strong>, &lt;strong>test-time adaptation mechanisms&lt;/strong>, and &lt;strong>self-supervised representation mining&lt;/strong>. Below, we detail our latest findings in gradient-aware decoupling (DAR), stable test-time training (Stable TTT), unsupervised cell recognition (PSM), and robustness benchmarking for pathological foundation models.&lt;/p>
&lt;h2 id="1-gradient-aware-decoupling-for-mixed-data-bias-dar">1. Gradient-Aware Decoupling for Mixed Data Bias (DAR)&lt;/h2>
&lt;p>&lt;strong>Original Paper:&lt;/strong> &lt;em>Gradient-aware learning for joint biases: Label noise and class imbalance&lt;/em> (Neural Networks 2024)
&lt;br>&lt;strong>Authors:&lt;/strong> Shichuan Zhang, Chenglu Zhu, Honglin Li, Jiatong Cai, Lin Yang&lt;/p>
&lt;h3 id="the-challenge">The Challenge&lt;/h3>
&lt;p>Real-world pathological data rarely comes clean; it often suffers from simultaneous &lt;strong>label noise&lt;/strong> and &lt;strong>class imbalance&lt;/strong> (long-tail distribution). This &amp;ldquo;mixed bias&amp;rdquo; creates a conflict for traditional handling strategies. For instance, reweighting methods designed for class imbalance can inadvertently amplify the influence of noisy samples, causing the model to overfit incorrect labels and degrading performance.&lt;/p>
&lt;h3 id="our-approach">Our Approach&lt;/h3>
&lt;p>We observed that the feature extractor (Encoder) and the Classifier react differently to these biases: the Encoder is more sensitive to noise, while the Classifier is more sensitive to imbalance. Based on this, we developed the &lt;strong>Gradient-aware Decoupling and Regulation (DAR)&lt;/strong> framework:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Probe Data Guidance:&lt;/strong> Leveraging the &amp;ldquo;early-learning&amp;rdquo; phenomenon, we use loss distribution during the initial training phase to automatically identify a small set of high-confidence, class-balanced samples (Probe Data).&lt;/li>
&lt;li>&lt;strong>Gradient Rectification:&lt;/strong> We train an auxiliary network on this probe data to generate reference gradients. By calculating the cosine similarity between the main network&amp;rsquo;s gradients and these reference gradients, we generate a direction matrix $\Omega$.&lt;/li>
&lt;li>&lt;strong>Decoupled Updates:&lt;/strong> Using $\Omega$, we intervene in the parameter updates for the Encoder and Classifier separately.&lt;/li>
&lt;/ul>
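&lt;p>The regulation step above can be sketched as follows. This is a minimal illustration rather than the paper&amp;rsquo;s implementation: the per-class probe budget, the hard 0/1 gate derived from $\Omega$, and the &amp;ldquo;encoder.*&amp;rdquo; / &amp;ldquo;classifier.*&amp;rdquo; parameter naming are simplifying assumptions.&lt;/p>

```python
import numpy as np

def select_probe(losses, labels, per_class=2):
    # Early-learning heuristic: within each class, take the lowest-loss
    # samples as high-confidence probe data (class-balanced by design).
    probe = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        probe.extend(idx[np.argsort(losses[idx])][:per_class])
    return np.array(sorted(probe))

def direction_matrix(main_grads, ref_grads):
    # Omega: per parameter tensor, the cosine similarity between the main
    # network's gradient and the auxiliary network's reference gradient,
    # collapsed here to a hard keep/suppress gate for clarity.
    omega = {}
    for name, g in main_grads.items():
        r = ref_grads[name]
        cos = np.dot(g.ravel(), r.ravel()) / (
            np.linalg.norm(g) * np.linalg.norm(r) + 1e-12)
        omega[name] = 1.0 if cos > 0 else 0.0
    return omega

def decoupled_step(params, grads, omega, lr_enc=0.1, lr_cls=0.01):
    # The decoupling: Encoder and Classifier parameters are regulated
    # separately, each with its own rate, and Omega gates every update.
    out = {}
    for name, p in params.items():
        lr = lr_enc if name.startswith("encoder") else lr_cls
        out[name] = p - lr * omega[name] * grads[name]
    return out
```

&lt;p>In the full method $\Omega$ modulates updates continuously rather than through a hard gate, but the sketch captures the key idea: gradient directions that conflict with the probe-derived reference are suppressed, and the Encoder and Classifier are regulated independently.&lt;/p>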
&lt;h3 id="results">Results&lt;/h3>
&lt;p>In a controlled CIFAR-10 environment with extreme mixed bias (40% label noise + 0.02 imbalance factor), DAR demonstrated strong resilience:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Accuracy:&lt;/strong> Improved from the ERM baseline&amp;rsquo;s &lt;strong>52.34%&lt;/strong> to &lt;strong>67.63%&lt;/strong>, significantly outperforming existing methods such as L2R and HAR.&lt;/li>
&lt;li>&lt;strong>Real-world Validation:&lt;/strong> On the Clothing1M dataset, DAR achieved an accuracy of &lt;strong>76.37%&lt;/strong>.&lt;/li>
&lt;/ul>
&lt;div style="overflow-x: auto; display: block; width: 100%;">
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align:left">Method&lt;/th>
&lt;th style="text-align:center">C10 (N=0.2, I=0.1)&lt;/th>
&lt;th style="text-align:center">C10 (N=0.2, I=0.05)&lt;/th>
&lt;th style="text-align:center">C10 (N=0.2, I=0.02)&lt;/th>
&lt;th style="text-align:center">C10 (N=0.4, I=0.1)&lt;/th>
&lt;th style="text-align:center">C10 (N=0.4, I=0.05)&lt;/th>
&lt;th style="text-align:center">C10 (N=0.4, I=0.02)&lt;/th>
&lt;th style="text-align:center">C100 (N=0.2, I=0.1)&lt;/th>
&lt;th style="text-align:center">C100 (N=0.2, I=0.05)&lt;/th>
&lt;th style="text-align:center">C100 (N=0.2, I=0.02)&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align:left">ERM&lt;/td>
&lt;td style="text-align:center">72.18 ± 0.27&lt;/td>
&lt;td style="text-align:center">67.68 ± 0.61&lt;/td>
&lt;td style="text-align:center">61.81 ± 0.63&lt;/td>
&lt;td style="text-align:center">62.21 ± 1.73&lt;/td>
&lt;td style="text-align:center">59.21 ± 2.32&lt;/td>
&lt;td style="text-align:center">52.34 ± 1.04&lt;/td>
&lt;td style="text-align:center">31.16 ± 0.29&lt;/td>
&lt;td style="text-align:center">27.94 ± 1.14&lt;/td>
&lt;td style="text-align:center">25.58 ± 0.72&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">FL&lt;/td>
&lt;td style="text-align:center">69.40 ± 0.56&lt;/td>
&lt;td style="text-align:center">66.17 ± 0.60&lt;/td>
&lt;td style="text-align:center">56.37 ± 2.53&lt;/td>
&lt;td style="text-align:center">61.55 ± 0.40&lt;/td>
&lt;td style="text-align:center">55.89 ± 2.17&lt;/td>
&lt;td style="text-align:center">48.64 ± 1.66&lt;/td>
&lt;td style="text-align:center">31.17 ± 0.91&lt;/td>
&lt;td style="text-align:center">27.40 ± 1.15&lt;/td>
&lt;td style="text-align:center">24.36 ± 0.82&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">GCE&lt;/td>
&lt;td style="text-align:center">73.76 ± 0.26&lt;/td>
&lt;td style="text-align:center">59.06 ± 0.78&lt;/td>
&lt;td style="text-align:center">55.19 ± 0.23&lt;/td>
&lt;td style="text-align:center">69.22 ± 0.33&lt;/td>
&lt;td style="text-align:center">59.79 ± 0.80&lt;/td>
&lt;td style="text-align:center">52.10 ± 0.37&lt;/td>
&lt;td style="text-align:center">26.97 ± 1.08&lt;/td>
&lt;td style="text-align:center">22.68 ± 0.26&lt;/td>
&lt;td style="text-align:center">17.57 ± 0.39&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">L2R(w)&lt;/td>
&lt;td style="text-align:center">&lt;strong>82.95 ± 0.19&lt;/strong>&lt;/td>
&lt;td style="text-align:center">78.57 ± 0.27&lt;/td>
&lt;td style="text-align:center">68.54 ± 0.96&lt;/td>
&lt;td style="text-align:center">&lt;strong>78.10 ± 0.21&lt;/strong>&lt;/td>
&lt;td style="text-align:center">70.43 ± 0.64&lt;/td>
&lt;td style="text-align:center">57.25 ± 0.91&lt;/td>
&lt;td style="text-align:center">37.01 ± 0.29&lt;/td>
&lt;td style="text-align:center">33.33 ± 0.17&lt;/td>
&lt;td style="text-align:center">13.60 ± 0.37&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">WN-Net(w)&lt;/td>
&lt;td style="text-align:center">76.83 ± 0.22&lt;/td>
&lt;td style="text-align:center">72.57 ± 0.50&lt;/td>
&lt;td style="text-align:center">65.00 ± 0.60&lt;/td>
&lt;td style="text-align:center">70.42 ± 0.18&lt;/td>
&lt;td style="text-align:center">61.68 ± 1.22&lt;/td>
&lt;td style="text-align:center">53.23 ± 0.91&lt;/td>
&lt;td style="text-align:center">38.81 ± 0.58&lt;/td>
&lt;td style="text-align:center">31.73 ± 2.30&lt;/td>
&lt;td style="text-align:center">26.88 ± 0.40&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">HAR&lt;/td>
&lt;td style="text-align:center">82.36 ± 0.64&lt;/td>
&lt;td style="text-align:center">78.63 ± 0.86&lt;/td>
&lt;td style="text-align:center">70.76 ± 1.49&lt;/td>
&lt;td style="text-align:center">76.80 ± 1.32&lt;/td>
&lt;td style="text-align:center">67.70 ± 2.03&lt;/td>
&lt;td style="text-align:center">54.55 ± 2.39&lt;/td>
&lt;td style="text-align:center">43.24 ± 1.59&lt;/td>
&lt;td style="text-align:center">36.38 ± 0.5&lt;/td>
&lt;td style="text-align:center">28.68 ± 0.71&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">AutoDO&lt;/td>
&lt;td style="text-align:center">78.36 ± 0.24&lt;/td>
&lt;td style="text-align:center">73.42 ± 0.64&lt;/td>
&lt;td style="text-align:center">65.44 ± 0.50&lt;/td>
&lt;td style="text-align:center">71.25 ± 0.42&lt;/td>
&lt;td style="text-align:center">66.14 ± 1.46&lt;/td>
&lt;td style="text-align:center">53.31 ± 2.02&lt;/td>
&lt;td style="text-align:center">39.43 ± 1.63&lt;/td>
&lt;td style="text-align:center">32.33 ± 0.58&lt;/td>
&lt;td style="text-align:center">23.01 ± 0.57&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">DAR(w/o)&lt;/td>
&lt;td style="text-align:center">82.34 ± 0.30&lt;/td>
&lt;td style="text-align:center">78.73 ± 0.64&lt;/td>
&lt;td style="text-align:center">72.26 ± 0.63&lt;/td>
&lt;td style="text-align:center">76.84 ± 0.26&lt;/td>
&lt;td style="text-align:center">72.31 ± 0.46&lt;/td>
&lt;td style="text-align:center">62.18 ± 0.70&lt;/td>
&lt;td style="text-align:center">&lt;strong>46.01 ± 0.54&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>39.49 ± 0.64&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>31.42 ± 0.97&lt;/strong>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">DAR(w)&lt;/td>
&lt;td style="text-align:center">82.79 ± 0.13&lt;/td>
&lt;td style="text-align:center">&lt;strong>79.50 ± 0.31&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>74.83 ± 0.79&lt;/strong>&lt;/td>
&lt;td style="text-align:center">77.39 ± 0.33&lt;/td>
&lt;td style="text-align:center">&lt;strong>74.48 ± 0.94&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>67.63 ± 0.46&lt;/strong>&lt;/td>
&lt;td style="text-align:center">45.03 ± 0.53&lt;/td>
&lt;td style="text-align:center">38.02 ± 0.39&lt;/td>
&lt;td style="text-align:center">31.17 ± 0.77&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;/div>
&lt;div style="text-align: center; font-size: 0.6em; color: #555; margin-top: 5px;">
Table 1: Accuracy on CIFAR-10 and CIFAR-100 with training sets under various noise ratios (N) and imbalance factors (I).
&lt;/div>
&lt;hr>
&lt;h2 id="2-stable-test-time-training-for-cross-center-generalization-stable-ttt">2. Stable Test-Time Training for Cross-Center Generalization (Stable TTT)&lt;/h2>
&lt;p>&lt;strong>Original Paper:&lt;/strong> &lt;em>Stable Test-Time Training for Semantic Segmentation with Output Contrastive Loss&lt;/em> (ICASSP 2025)
&lt;br>&lt;strong>Authors:&lt;/strong> Yunlong Zhang, Zhongyi Shui, Honglin Li, Yuxuan Sun, Chenglu Zhu, Lin Yang&lt;/p>
&lt;h3 id="the-challenge-1">The Challenge&lt;/h3>
&lt;p>Pathology models often fail when deployed across different centers due to variations in stain styles and scanners (&lt;strong>domain shift&lt;/strong>). Existing Test-Time Training (TTT) methods for segmentation are computationally heavy and prone to &amp;ldquo;model collapse,&amp;rdquo; where the model starts predicting the background class for everything.&lt;/p>
&lt;h3 id="our-approach-1">Our Approach&lt;/h3>
&lt;p>We propose a lightweight strategy using &lt;strong>Output Contrastive Loss (OCL)&lt;/strong>:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Output Space Contrast:&lt;/strong> Instead of expensive feature space comparisons, OCL forces the model to pull prediction distributions of similar pixels closer while pushing dissimilar ones apart directly in the output space.&lt;/li>
&lt;li>&lt;strong>High Temperature &amp;amp; Stochastic Restoration:&lt;/strong> We use a high temperature coefficient to prevent overly sharp predictions and a &amp;ldquo;Stochastic Restoration&amp;rdquo; mechanism to randomly reset parameters, preventing catastrophic forgetting.&lt;/li>
&lt;/ol>
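&lt;p>The two ingredients can be sketched as follows. This is a simplified illustration, not the paper&amp;rsquo;s implementation: it assumes pseudo-labels from the model&amp;rsquo;s own argmax define the positive pairs, and the restoration probability p is illustrative.&lt;/p>

```python
import numpy as np

def output_contrastive_loss(probs, tau=2.0):
    # probs: (N, C) softmax outputs for N pixels. The contrast acts
    # directly on these output distributions, so no feature bank or
    # extra projection head is needed.
    sim = probs @ probs.T / tau              # high tau keeps logits soft
    labels = probs.argmax(axis=1)            # pseudo-labels define pairs
    pos = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(pos, 0.0)
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    denom = e.sum(axis=1) - np.diagonal(e)   # all pairs except self
    num = (e * pos).sum(axis=1)              # same-pseudo-class pairs
    has_pos = pos.sum(axis=1) > 0            # skip pixels with no partner
    ratio = (num[has_pos] + 1e-8) / (denom[has_pos] + 1e-8)
    return float(-np.log(ratio).mean())      # pull positives, push the rest

def stochastic_restore(params, source_params, p=0.01, rng=None):
    # Stochastic Restoration: randomly reset a small fraction p of each
    # weight tensor to its source-model value, limiting drift and
    # catastrophic forgetting during test-time training.
    rng = np.random.default_rng() if rng is None else rng
    out = {}
    for name, w in params.items():
        mask = (p > rng.random(w.shape)).astype(w.dtype)
        out[name] = mask * source_params[name] + (1.0 - mask) * w
    return out
```

&lt;p>Because the loss operates on (N, C) output distributions rather than high-dimensional feature maps, its memory footprint stays small, consistent with the efficiency motivation above.&lt;/p>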
&lt;p>
&lt;figure id="figure-figure-1-a-visual-comparison-showing-how-ocl-acts-directly-and-efficiently-on-the-output-layer-to-separate-classes">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="OCL vs Feature Space Contrast" srcset="
/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/fig2_hua7557e863b72d964a1012ebbe8e8195a_46944_f5b9f1b14636f9dca0abac83416ee99a.webp 400w,
/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/fig2_hua7557e863b72d964a1012ebbe8e8195a_46944_464c65aa079311fa765ca85566815eea.webp 760w,
/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/fig2_hua7557e863b72d964a1012ebbe8e8195a_46944_1200x1200_fit_q80_h2_lanczos_2.webp 1200w"
src="https://hzzcl.github.io/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/fig2_hua7557e863b72d964a1012ebbe8e8195a_46944_f5b9f1b14636f9dca0abac83416ee99a.webp"
width="760"
height="223"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Figure 1: A visual comparison showing how OCL acts directly and efficiently on the output layer to separate classes.
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;h3 id="results-1">Results&lt;/h3>
&lt;p>On the GTA5 $\to$ Cityscapes semantic segmentation task:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Performance:&lt;/strong> mIoU increased by &lt;strong>7.5 points&lt;/strong> over the baseline (from 37.5% to 45.0%).&lt;/li>
&lt;li>&lt;strong>Efficiency:&lt;/strong> Reduced VRAM usage by &lt;strong>2.1 GB&lt;/strong> and cut adaptation time by &lt;strong>23 seconds&lt;/strong>, making deployment on clinical edge devices feasible.&lt;/li>
&lt;/ul>
&lt;div style="overflow-x: auto; display: block; width: 100%;">
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align:left">Method&lt;/th>
&lt;th style="text-align:center">Setting&lt;/th>
&lt;th style="text-align:center">Road&lt;/th>
&lt;th style="text-align:center">Side.&lt;/th>
&lt;th style="text-align:center">Build.&lt;/th>
&lt;th style="text-align:center">Wall&lt;/th>
&lt;th style="text-align:center">Fence&lt;/th>
&lt;th style="text-align:center">Pole&lt;/th>
&lt;th style="text-align:center">Light&lt;/th>
&lt;th style="text-align:center">Sign&lt;/th>
&lt;th style="text-align:center">Veget.&lt;/th>
&lt;th style="text-align:center">Terr.&lt;/th>
&lt;th style="text-align:center">Sky&lt;/th>
&lt;th style="text-align:center">Pers.&lt;/th>
&lt;th style="text-align:center">Rider&lt;/th>
&lt;th style="text-align:center">Car&lt;/th>
&lt;th style="text-align:center">Truck&lt;/th>
&lt;th style="text-align:center">Bus&lt;/th>
&lt;th style="text-align:center">Train&lt;/th>
&lt;th style="text-align:center">Motor.&lt;/th>
&lt;th style="text-align:center">Bike&lt;/th>
&lt;th style="text-align:center">mIoU&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align:left">&lt;strong>GTA→CS (Val.)&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">AdaptSegNet&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">86.5&lt;/td>
&lt;td style="text-align:center">36.0&lt;/td>
&lt;td style="text-align:center">79.9&lt;/td>
&lt;td style="text-align:center">23.4&lt;/td>
&lt;td style="text-align:center">23.3&lt;/td>
&lt;td style="text-align:center">23.9&lt;/td>
&lt;td style="text-align:center">35.2&lt;/td>
&lt;td style="text-align:center">14.8&lt;/td>
&lt;td style="text-align:center">83.4&lt;/td>
&lt;td style="text-align:center">33.3&lt;/td>
&lt;td style="text-align:center">75.6&lt;/td>
&lt;td style="text-align:center">58.5&lt;/td>
&lt;td style="text-align:center">27.6&lt;/td>
&lt;td style="text-align:center">73.7&lt;/td>
&lt;td style="text-align:center">32.5&lt;/td>
&lt;td style="text-align:center">35.4&lt;/td>
&lt;td style="text-align:center">3.9&lt;/td>
&lt;td style="text-align:center">30.1&lt;/td>
&lt;td style="text-align:center">28.1&lt;/td>
&lt;td style="text-align:center">42.4&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">CLAN&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">87.0&lt;/td>
&lt;td style="text-align:center">27.1&lt;/td>
&lt;td style="text-align:center">79.6&lt;/td>
&lt;td style="text-align:center">27.3&lt;/td>
&lt;td style="text-align:center">23.3&lt;/td>
&lt;td style="text-align:center">28.3&lt;/td>
&lt;td style="text-align:center">35.5&lt;/td>
&lt;td style="text-align:center">24.2&lt;/td>
&lt;td style="text-align:center">83.6&lt;/td>
&lt;td style="text-align:center">27.4&lt;/td>
&lt;td style="text-align:center">74.2&lt;/td>
&lt;td style="text-align:center">58.6&lt;/td>
&lt;td style="text-align:center">28.0&lt;/td>
&lt;td style="text-align:center">76.2&lt;/td>
&lt;td style="text-align:center">33.1&lt;/td>
&lt;td style="text-align:center">36.7&lt;/td>
&lt;td style="text-align:center">6.7&lt;/td>
&lt;td style="text-align:center">31.9&lt;/td>
&lt;td style="text-align:center">31.4&lt;/td>
&lt;td style="text-align:center">43.2&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">AdvEnt&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">89.4&lt;/td>
&lt;td style="text-align:center">33.1&lt;/td>
&lt;td style="text-align:center">81.0&lt;/td>
&lt;td style="text-align:center">26.6&lt;/td>
&lt;td style="text-align:center">26.8&lt;/td>
&lt;td style="text-align:center">27.2&lt;/td>
&lt;td style="text-align:center">33.5&lt;/td>
&lt;td style="text-align:center">24.7&lt;/td>
&lt;td style="text-align:center">83.9&lt;/td>
&lt;td style="text-align:center">36.7&lt;/td>
&lt;td style="text-align:center">78.8&lt;/td>
&lt;td style="text-align:center">58.7&lt;/td>
&lt;td style="text-align:center">30.5&lt;/td>
&lt;td style="text-align:center">84.8&lt;/td>
&lt;td style="text-align:center">38.5&lt;/td>
&lt;td style="text-align:center">44.5&lt;/td>
&lt;td style="text-align:center">1.7&lt;/td>
&lt;td style="text-align:center">31.6&lt;/td>
&lt;td style="text-align:center">32.4&lt;/td>
&lt;td style="text-align:center">45.5&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">CBST&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">91.8&lt;/td>
&lt;td style="text-align:center">53.5&lt;/td>
&lt;td style="text-align:center">80.5&lt;/td>
&lt;td style="text-align:center">32.7&lt;/td>
&lt;td style="text-align:center">21.0&lt;/td>
&lt;td style="text-align:center">34.0&lt;/td>
&lt;td style="text-align:center">28.9&lt;/td>
&lt;td style="text-align:center">20.4&lt;/td>
&lt;td style="text-align:center">83.9&lt;/td>
&lt;td style="text-align:center">34.2&lt;/td>
&lt;td style="text-align:center">80.9&lt;/td>
&lt;td style="text-align:center">53.1&lt;/td>
&lt;td style="text-align:center">24.0&lt;/td>
&lt;td style="text-align:center">82.7&lt;/td>
&lt;td style="text-align:center">30.3&lt;/td>
&lt;td style="text-align:center">35.9&lt;/td>
&lt;td style="text-align:center">16.0&lt;/td>
&lt;td style="text-align:center">25.9&lt;/td>
&lt;td style="text-align:center">42.8&lt;/td>
&lt;td style="text-align:center">45.9&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">DACS&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">89.9&lt;/td>
&lt;td style="text-align:center">39.7&lt;/td>
&lt;td style="text-align:center">87.9&lt;/td>
&lt;td style="text-align:center">30.7&lt;/td>
&lt;td style="text-align:center">39.5&lt;/td>
&lt;td style="text-align:center">38.5&lt;/td>
&lt;td style="text-align:center">46.4&lt;/td>
&lt;td style="text-align:center">52.8&lt;/td>
&lt;td style="text-align:center">88.0&lt;/td>
&lt;td style="text-align:center">44.0&lt;/td>
&lt;td style="text-align:center">88.8&lt;/td>
&lt;td style="text-align:center">67.2&lt;/td>
&lt;td style="text-align:center">35.8&lt;/td>
&lt;td style="text-align:center">84.5&lt;/td>
&lt;td style="text-align:center">45.7&lt;/td>
&lt;td style="text-align:center">50.2&lt;/td>
&lt;td style="text-align:center">0.0&lt;/td>
&lt;td style="text-align:center">27.3&lt;/td>
&lt;td style="text-align:center">34.0&lt;/td>
&lt;td style="text-align:center">52.1&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">Source only&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">71.9&lt;/td>
&lt;td style="text-align:center">15.6&lt;/td>
&lt;td style="text-align:center">74.4&lt;/td>
&lt;td style="text-align:center">22.4&lt;/td>
&lt;td style="text-align:center">14.8&lt;/td>
&lt;td style="text-align:center">22.9&lt;/td>
&lt;td style="text-align:center">35.4&lt;/td>
&lt;td style="text-align:center">18.4&lt;/td>
&lt;td style="text-align:center">81.1&lt;/td>
&lt;td style="text-align:center">22.0&lt;/td>
&lt;td style="text-align:center">68.3&lt;/td>
&lt;td style="text-align:center">57.3&lt;/td>
&lt;td style="text-align:center">27.9&lt;/td>
&lt;td style="text-align:center">68.1&lt;/td>
&lt;td style="text-align:center">33.1&lt;/td>
&lt;td style="text-align:center">5.8&lt;/td>
&lt;td style="text-align:center">6.5&lt;/td>
&lt;td style="text-align:center">30.5&lt;/td>
&lt;td style="text-align:center">35.3&lt;/td>
&lt;td style="text-align:center">37.5&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+TENT&lt;/td>
&lt;td style="text-align:center">TTT&lt;/td>
&lt;td style="text-align:center">71.5&lt;/td>
&lt;td style="text-align:center">22.6&lt;/td>
&lt;td style="text-align:center">76.9&lt;/td>
&lt;td style="text-align:center">20.0&lt;/td>
&lt;td style="text-align:center">17.1&lt;/td>
&lt;td style="text-align:center">21.6&lt;/td>
&lt;td style="text-align:center">29.2&lt;/td>
&lt;td style="text-align:center">15.3&lt;/td>
&lt;td style="text-align:center">78.4&lt;/td>
&lt;td style="text-align:center">33.9&lt;/td>
&lt;td style="text-align:center">75.3&lt;/td>
&lt;td style="text-align:center">50.8&lt;/td>
&lt;td style="text-align:center">3.5&lt;/td>
&lt;td style="text-align:center">80.9&lt;/td>
&lt;td style="text-align:center">29.5&lt;/td>
&lt;td style="text-align:center">31.7&lt;/td>
&lt;td style="text-align:center">4.3&lt;/td>
&lt;td style="text-align:center">13.7&lt;/td>
&lt;td style="text-align:center">2.1&lt;/td>
&lt;td style="text-align:center">35.7&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+MEMO&lt;/td>
&lt;td style="text-align:center">TTT&lt;/td>
&lt;td style="text-align:center">82.6&lt;/td>
&lt;td style="text-align:center">0.1&lt;/td>
&lt;td style="text-align:center">68.0&lt;/td>
&lt;td style="text-align:center">0.0&lt;/td>
&lt;td style="text-align:center">0.2&lt;/td>
&lt;td style="text-align:center">1.3&lt;/td>
&lt;td style="text-align:center">1.7&lt;/td>
&lt;td style="text-align:center">0.2&lt;/td>
&lt;td style="text-align:center">78.3&lt;/td>
&lt;td style="text-align:center">0.3&lt;/td>
&lt;td style="text-align:center">82.3&lt;/td>
&lt;td style="text-align:center">1.3&lt;/td>
&lt;td style="text-align:center">0.3&lt;/td>
&lt;td style="text-align:center">77.9&lt;/td>
&lt;td style="text-align:center">6.2&lt;/td>
&lt;td style="text-align:center">1.8&lt;/td>
&lt;td style="text-align:center">0.0&lt;/td>
&lt;td style="text-align:center">0.8&lt;/td>
&lt;td style="text-align:center">0.1&lt;/td>
&lt;td style="text-align:center">21.3&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+CoTTA&lt;/td>
&lt;td style="text-align:center">TTT&lt;/td>
&lt;td style="text-align:center">74.4&lt;/td>
&lt;td style="text-align:center">13.5&lt;/td>
&lt;td style="text-align:center">75.3&lt;/td>
&lt;td style="text-align:center">24.1&lt;/td>
&lt;td style="text-align:center">14.0&lt;/td>
&lt;td style="text-align:center">22.9&lt;/td>
&lt;td style="text-align:center">31.1&lt;/td>
&lt;td style="text-align:center">16.1&lt;/td>
&lt;td style="text-align:center">81.8&lt;/td>
&lt;td style="text-align:center">22.6&lt;/td>
&lt;td style="text-align:center">69.7&lt;/td>
&lt;td style="text-align:center">57.3&lt;/td>
&lt;td style="text-align:center">26.7&lt;/td>
&lt;td style="text-align:center">71.9&lt;/td>
&lt;td style="text-align:center">33.4&lt;/td>
&lt;td style="text-align:center">6.2&lt;/td>
&lt;td style="text-align:center">8.1&lt;/td>
&lt;td style="text-align:center">27.2&lt;/td>
&lt;td style="text-align:center">31.7&lt;/td>
&lt;td style="text-align:center">37.3&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">&lt;strong>+OCL&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>TTT&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>87.1&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>42.1&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>81.6&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>29.7&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>20.2&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>27.5&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>37.8&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>18.3&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>83.8&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>33.8&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>74.7&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>60.5&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>24.8&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>85.3&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>36.3&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>46.7&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>4.4&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>29.6&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>31.7&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>45.0&lt;/strong>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">FDA&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">92.1&lt;/td>
&lt;td style="text-align:center">52.3&lt;/td>
&lt;td style="text-align:center">80.7&lt;/td>
&lt;td style="text-align:center">23.6&lt;/td>
&lt;td style="text-align:center">26.4&lt;/td>
&lt;td style="text-align:center">35.5&lt;/td>
&lt;td style="text-align:center">37.7&lt;/td>
&lt;td style="text-align:center">38.6&lt;/td>
&lt;td style="text-align:center">81.2&lt;/td>
&lt;td style="text-align:center">32.4&lt;/td>
&lt;td style="text-align:center">73.2&lt;/td>
&lt;td style="text-align:center">61.2&lt;/td>
&lt;td style="text-align:center">34.0&lt;/td>
&lt;td style="text-align:center">84.0&lt;/td>
&lt;td style="text-align:center">32.2&lt;/td>
&lt;td style="text-align:center">51.2&lt;/td>
&lt;td style="text-align:center">8.0&lt;/td>
&lt;td style="text-align:center">26.8&lt;/td>
&lt;td style="text-align:center">44.1&lt;/td>
&lt;td style="text-align:center">48.2&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+OCL&lt;/td>
&lt;td style="text-align:center">+TTT&lt;/td>
&lt;td style="text-align:center">93.2&lt;/td>
&lt;td style="text-align:center">57.0&lt;/td>
&lt;td style="text-align:center">83.5&lt;/td>
&lt;td style="text-align:center">31.5&lt;/td>
&lt;td style="text-align:center">31.5&lt;/td>
&lt;td style="text-align:center">38.6&lt;/td>
&lt;td style="text-align:center">41.3&lt;/td>
&lt;td style="text-align:center">39.4&lt;/td>
&lt;td style="text-align:center">85.0&lt;/td>
&lt;td style="text-align:center">42.6&lt;/td>
&lt;td style="text-align:center">76.8&lt;/td>
&lt;td style="text-align:center">63.1&lt;/td>
&lt;td style="text-align:center">34.2&lt;/td>
&lt;td style="text-align:center">85.5&lt;/td>
&lt;td style="text-align:center">34.2&lt;/td>
&lt;td style="text-align:center">51.5&lt;/td>
&lt;td style="text-align:center">9.0&lt;/td>
&lt;td style="text-align:center">26.6&lt;/td>
&lt;td style="text-align:center">46.1&lt;/td>
&lt;td style="text-align:center">51.1&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">DAFormer&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">96.5&lt;/td>
&lt;td style="text-align:center">74.0&lt;/td>
&lt;td style="text-align:center">89.5&lt;/td>
&lt;td style="text-align:center">53.4&lt;/td>
&lt;td style="text-align:center">47.7&lt;/td>
&lt;td style="text-align:center">50.6&lt;/td>
&lt;td style="text-align:center">54.7&lt;/td>
&lt;td style="text-align:center">63.6&lt;/td>
&lt;td style="text-align:center">90.0&lt;/td>
&lt;td style="text-align:center">44.4&lt;/td>
&lt;td style="text-align:center">92.6&lt;/td>
&lt;td style="text-align:center">71.8&lt;/td>
&lt;td style="text-align:center">44.8&lt;/td>
&lt;td style="text-align:center">92.6&lt;/td>
&lt;td style="text-align:center">77.8&lt;/td>
&lt;td style="text-align:center">80.6&lt;/td>
&lt;td style="text-align:center">63.6&lt;/td>
&lt;td style="text-align:center">56.7&lt;/td>
&lt;td style="text-align:center">63.4&lt;/td>
&lt;td style="text-align:center">68.8&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+OCL&lt;/td>
&lt;td style="text-align:center">+TTT&lt;/td>
&lt;td style="text-align:center">96.6&lt;/td>
&lt;td style="text-align:center">74.7&lt;/td>
&lt;td style="text-align:center">89.6&lt;/td>
&lt;td style="text-align:center">53.5&lt;/td>
&lt;td style="text-align:center">48.1&lt;/td>
&lt;td style="text-align:center">51.3&lt;/td>
&lt;td style="text-align:center">55.3&lt;/td>
&lt;td style="text-align:center">64.0&lt;/td>
&lt;td style="text-align:center">90.0&lt;/td>
&lt;td style="text-align:center">44.5&lt;/td>
&lt;td style="text-align:center">92.5&lt;/td>
&lt;td style="text-align:center">72.3&lt;/td>
&lt;td style="text-align:center">45.4&lt;/td>
&lt;td style="text-align:center">92.8&lt;/td>
&lt;td style="text-align:center">78.6&lt;/td>
&lt;td style="text-align:center">81.4&lt;/td>
&lt;td style="text-align:center">66.8&lt;/td>
&lt;td style="text-align:center">59.0&lt;/td>
&lt;td style="text-align:center">64.0&lt;/td>
&lt;td style="text-align:center">69.5&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">&lt;strong>Synthia→CS (Val.)&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;td style="text-align:center">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">AdvEnt&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">85.6&lt;/td>
&lt;td style="text-align:center">42.2&lt;/td>
&lt;td style="text-align:center">79.7&lt;/td>
&lt;td style="text-align:center">8.7&lt;/td>
&lt;td style="text-align:center">0.4&lt;/td>
&lt;td style="text-align:center">25.9&lt;/td>
&lt;td style="text-align:center">5.4&lt;/td>
&lt;td style="text-align:center">8.1&lt;/td>
&lt;td style="text-align:center">80.4&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">84.1&lt;/td>
&lt;td style="text-align:center">57.9&lt;/td>
&lt;td style="text-align:center">23.8&lt;/td>
&lt;td style="text-align:center">73.3&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">36.4&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">14.2&lt;/td>
&lt;td style="text-align:center">33.0&lt;/td>
&lt;td style="text-align:center">41.2&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">CBST&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">68.0&lt;/td>
&lt;td style="text-align:center">29.9&lt;/td>
&lt;td style="text-align:center">76.3&lt;/td>
&lt;td style="text-align:center">10.8&lt;/td>
&lt;td style="text-align:center">1.4&lt;/td>
&lt;td style="text-align:center">33.9&lt;/td>
&lt;td style="text-align:center">22.8&lt;/td>
&lt;td style="text-align:center">29.5&lt;/td>
&lt;td style="text-align:center">77.6&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">78.3&lt;/td>
&lt;td style="text-align:center">60.6&lt;/td>
&lt;td style="text-align:center">28.3&lt;/td>
&lt;td style="text-align:center">81.6&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">23.5&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">18.8&lt;/td>
&lt;td style="text-align:center">39.8&lt;/td>
&lt;td style="text-align:center">42.6&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">MRKLD&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">67.7&lt;/td>
&lt;td style="text-align:center">32.2&lt;/td>
&lt;td style="text-align:center">73.9&lt;/td>
&lt;td style="text-align:center">10.7&lt;/td>
&lt;td style="text-align:center">1.6&lt;/td>
&lt;td style="text-align:center">37.4&lt;/td>
&lt;td style="text-align:center">22.2&lt;/td>
&lt;td style="text-align:center">31.2&lt;/td>
&lt;td style="text-align:center">80.8&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">80.5&lt;/td>
&lt;td style="text-align:center">60.8&lt;/td>
&lt;td style="text-align:center">29.1&lt;/td>
&lt;td style="text-align:center">82.8&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">25.0&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">19.4&lt;/td>
&lt;td style="text-align:center">45.3&lt;/td>
&lt;td style="text-align:center">43.8&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">DACS&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">80.6&lt;/td>
&lt;td style="text-align:center">25.1&lt;/td>
&lt;td style="text-align:center">81.9&lt;/td>
&lt;td style="text-align:center">21.5&lt;/td>
&lt;td style="text-align:center">2.9&lt;/td>
&lt;td style="text-align:center">37.2&lt;/td>
&lt;td style="text-align:center">33.7&lt;/td>
&lt;td style="text-align:center">24.0&lt;/td>
&lt;td style="text-align:center">83.7&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">90.8&lt;/td>
&lt;td style="text-align:center">67.6&lt;/td>
&lt;td style="text-align:center">38.3&lt;/td>
&lt;td style="text-align:center">82.9&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">38.9&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">28.5&lt;/td>
&lt;td style="text-align:center">47.6&lt;/td>
&lt;td style="text-align:center">48.3&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">Source only&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">45.2&lt;/td>
&lt;td style="text-align:center">19.6&lt;/td>
&lt;td style="text-align:center">72.0&lt;/td>
&lt;td style="text-align:center">6.7&lt;/td>
&lt;td style="text-align:center">0.1&lt;/td>
&lt;td style="text-align:center">25.4&lt;/td>
&lt;td style="text-align:center">5.5&lt;/td>
&lt;td style="text-align:center">7.8&lt;/td>
&lt;td style="text-align:center">75.3&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">81.9&lt;/td>
&lt;td style="text-align:center">57.3&lt;/td>
&lt;td style="text-align:center">17.3&lt;/td>
&lt;td style="text-align:center">39.0&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">19.5&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">7.0&lt;/td>
&lt;td style="text-align:center">25.7&lt;/td>
&lt;td style="text-align:center">31.5&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+TENT&lt;/td>
&lt;td style="text-align:center">TTT&lt;/td>
&lt;td style="text-align:center">38.1&lt;/td>
&lt;td style="text-align:center">18.9&lt;/td>
&lt;td style="text-align:center">57.5&lt;/td>
&lt;td style="text-align:center">1.1&lt;/td>
&lt;td style="text-align:center">0.2&lt;/td>
&lt;td style="text-align:center">24.7&lt;/td>
&lt;td style="text-align:center">7.1&lt;/td>
&lt;td style="text-align:center">9.0&lt;/td>
&lt;td style="text-align:center">74.5&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">81.4&lt;/td>
&lt;td style="text-align:center">47.0&lt;/td>
&lt;td style="text-align:center">17.0&lt;/td>
&lt;td style="text-align:center">67.7&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">8.6&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">5.9&lt;/td>
&lt;td style="text-align:center">29.7&lt;/td>
&lt;td style="text-align:center">30.5&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+MEMO&lt;/td>
&lt;td style="text-align:center">TTT&lt;/td>
&lt;td style="text-align:center">63.9&lt;/td>
&lt;td style="text-align:center">0.7&lt;/td>
&lt;td style="text-align:center">65.4&lt;/td>
&lt;td style="text-align:center">0.0&lt;/td>
&lt;td style="text-align:center">0.0&lt;/td>
&lt;td style="text-align:center">2.1&lt;/td>
&lt;td style="text-align:center">0.3&lt;/td>
&lt;td style="text-align:center">0.3&lt;/td>
&lt;td style="text-align:center">66.4&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">78.1&lt;/td>
&lt;td style="text-align:center">6.7&lt;/td>
&lt;td style="text-align:center">0.5&lt;/td>
&lt;td style="text-align:center">15.5&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.8&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.5&lt;/td>
&lt;td style="text-align:center">0.1&lt;/td>
&lt;td style="text-align:center">19.0&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+CoTTA&lt;/td>
&lt;td style="text-align:center">TTT&lt;/td>
&lt;td style="text-align:center">48.5&lt;/td>
&lt;td style="text-align:center">20.8&lt;/td>
&lt;td style="text-align:center">73.1&lt;/td>
&lt;td style="text-align:center">8.4&lt;/td>
&lt;td style="text-align:center">0.2&lt;/td>
&lt;td style="text-align:center">24.3&lt;/td>
&lt;td style="text-align:center">12.6&lt;/td>
&lt;td style="text-align:center">11.0&lt;/td>
&lt;td style="text-align:center">76.0&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">82.2&lt;/td>
&lt;td style="text-align:center">56.6&lt;/td>
&lt;td style="text-align:center">17.3&lt;/td>
&lt;td style="text-align:center">40.2&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">21.1&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">9.2&lt;/td>
&lt;td style="text-align:center">27.7&lt;/td>
&lt;td style="text-align:center">33.0&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">&lt;strong>+OCL&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>TTT&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>66.6&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>27.5&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>78.8&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>8.0&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>0.2&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>29.0&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>8.1&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>11.3&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>80.1&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>-&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>82.4&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>55.9&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>16.5&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>58.9&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>-&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>28.3&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>-&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>11.8&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>28.4&lt;/strong>&lt;/td>
&lt;td style="text-align:center">&lt;strong>36.9&lt;/strong>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">FDA&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">76.2&lt;/td>
&lt;td style="text-align:center">33.3&lt;/td>
&lt;td style="text-align:center">74.8&lt;/td>
&lt;td style="text-align:center">8.3&lt;/td>
&lt;td style="text-align:center">0.3&lt;/td>
&lt;td style="text-align:center">32.2&lt;/td>
&lt;td style="text-align:center">19.8&lt;/td>
&lt;td style="text-align:center">24.5&lt;/td>
&lt;td style="text-align:center">62.6&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">83.8&lt;/td>
&lt;td style="text-align:center">58.2&lt;/td>
&lt;td style="text-align:center">27.3&lt;/td>
&lt;td style="text-align:center">82.2&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">40.3&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">31.5&lt;/td>
&lt;td style="text-align:center">45.1&lt;/td>
&lt;td style="text-align:center">43.8&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+OCL&lt;/td>
&lt;td style="text-align:center">+TTT&lt;/td>
&lt;td style="text-align:center">78.0&lt;/td>
&lt;td style="text-align:center">33.8&lt;/td>
&lt;td style="text-align:center">78.9&lt;/td>
&lt;td style="text-align:center">10.9&lt;/td>
&lt;td style="text-align:center">0.3&lt;/td>
&lt;td style="text-align:center">34.1&lt;/td>
&lt;td style="text-align:center">21.9&lt;/td>
&lt;td style="text-align:center">26.1&lt;/td>
&lt;td style="text-align:center">75.7&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">84.8&lt;/td>
&lt;td style="text-align:center">60.8&lt;/td>
&lt;td style="text-align:center">28.6&lt;/td>
&lt;td style="text-align:center">84.3&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">43.1&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">32.5&lt;/td>
&lt;td style="text-align:center">45.3&lt;/td>
&lt;td style="text-align:center">46.2&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">DAFormer&lt;/td>
&lt;td style="text-align:center">DA&lt;/td>
&lt;td style="text-align:center">82.2&lt;/td>
&lt;td style="text-align:center">37.2&lt;/td>
&lt;td style="text-align:center">88.6&lt;/td>
&lt;td style="text-align:center">42.9&lt;/td>
&lt;td style="text-align:center">8.5&lt;/td>
&lt;td style="text-align:center">50.1&lt;/td>
&lt;td style="text-align:center">55.1&lt;/td>
&lt;td style="text-align:center">54.3&lt;/td>
&lt;td style="text-align:center">85.7&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">88.0&lt;/td>
&lt;td style="text-align:center">73.6&lt;/td>
&lt;td style="text-align:center">48.6&lt;/td>
&lt;td style="text-align:center">87.6&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">62.8&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">53.1&lt;/td>
&lt;td style="text-align:center">62.4&lt;/td>
&lt;td style="text-align:center">61.3&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">+OCL&lt;/td>
&lt;td style="text-align:center">+TTT&lt;/td>
&lt;td style="text-align:center">81.6&lt;/td>
&lt;td style="text-align:center">36.5&lt;/td>
&lt;td style="text-align:center">88.7&lt;/td>
&lt;td style="text-align:center">43.1&lt;/td>
&lt;td style="text-align:center">8.4&lt;/td>
&lt;td style="text-align:center">50.8&lt;/td>
&lt;td style="text-align:center">55.8&lt;/td>
&lt;td style="text-align:center">55.1&lt;/td>
&lt;td style="text-align:center">86.2&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">88.4&lt;/td>
&lt;td style="text-align:center">74.2&lt;/td>
&lt;td style="text-align:center">49.5&lt;/td>
&lt;td style="text-align:center">87.8&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">63.2&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">54.5&lt;/td>
&lt;td style="text-align:center">62.8&lt;/td>
&lt;td style="text-align:center">61.7&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;/div>
&lt;hr>
&lt;h2 id="3-unsupervised-cell-recognition-with-prior-self-activation-maps-psm">3. Unsupervised Cell Recognition with Prior Self-Activation Maps (PSM)&lt;/h2>
&lt;p>&lt;strong>Original Paper:&lt;/strong> &lt;em>Exploring Unsupervised Cell Recognition with Prior Self-activation Maps&lt;/em> (MICCAI 2023)
&lt;strong>Authors:&lt;/strong> Pingyi Chen, Chenglu Zhu, Zhongyi Shui, Jiatong Cai, Sunyi Zheng, Shichuan Zhang, Lin Yang&lt;/p>
&lt;h3 id="the-challenge-2">The Challenge&lt;/h3>
&lt;p>Pathology images contain dense cell structures, making pixel-level manual annotation prohibitively expensive. We investigated a core question: &lt;em>Can deep networks spontaneously locate and segment cells without any human labels?&lt;/em>&lt;/p>
&lt;h3 id="our-approach-2">Our Approach&lt;/h3>
&lt;p>We found that shallow layers of self-supervised pre-trained networks inherently contain rich morphological cues. We developed the &lt;strong>Prior Self-activation Maps (PSM)&lt;/strong> framework:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Gradient Aggregation:&lt;/strong> Extracts and aggregates gradient information from shallow layers to generate an initial Class Activation Map (CAM).&lt;/li>
&lt;li>&lt;strong>Semantic Clustering Module (SCM):&lt;/strong> Uses K-Means clustering to refine these activation maps into high-quality &amp;ldquo;Pseudo Masks&amp;rdquo; for training downstream networks.&lt;/li>
&lt;/ul>
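&lt;p>As a rough sketch of the SCM refinement step, 2-means clustering over per-pixel activations can turn an activation map into a binary pseudo mask. This is a minimal NumPy illustration under simplified assumptions: the function name and the 1-D clustering are ours, not the released PSM implementation.&lt;/p>

```python
import numpy as np

def activation_to_pseudo_mask(act_map, iters=10):
    """1-D 2-means over pixel activations; the high-centroid cluster
    becomes the foreground (cell) pseudo mask."""
    x = act_map.ravel().astype(float)
    c = np.array([x.min(), x.max()], dtype=float)  # centroids start at the extremes
    for _ in range(iters):
        # Assign each pixel to its nearest centroid, then update centroids
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    fg = int(np.argmax(c))  # cluster with the higher mean activation = cells
    return (labels == fg).reshape(act_map.shape).astype(np.uint8)
```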
&lt;p>
&lt;figure id="figure-figure-1-the-psm-pipeline-gradient-extraction---activation-map-generation---semantic-clustering---training-downstream-networks">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="PSM Framework" srcset="
/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/fig3_hu053fa0d290ca62fd138f81704d2e11a4_21350_a8fabf51609ab7c6f4353aeb257dd21c.webp 400w,
/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/fig3_hu053fa0d290ca62fd138f81704d2e11a4_21350_3b91a4a610fa061c708d81dbdffa6ac6.webp 760w,
/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/fig3_hu053fa0d290ca62fd138f81704d2e11a4_21350_1200x1200_fit_q80_h2_lanczos_2.webp 1200w"
src="https://hzzcl.github.io/resume.io/resume.io/insights/research-on-robustness-and-generalization-learning-theory-in-pathological-image-analysis/fig3_hu053fa0d290ca62fd138f81704d2e11a4_21350_a8fabf51609ab7c6f4353aeb257dd21c.webp"
width="688"
height="250"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Figure 1: The PSM pipeline: Gradient extraction -&amp;gt; Activation map generation -&amp;gt; Semantic clustering -&amp;gt; Training downstream networks.
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;h3 id="results-2">Results&lt;/h3>
&lt;p>Achieved performance close to supervised methods &lt;strong>without using a single manual annotation&lt;/strong>:&lt;/p>
&lt;p>&lt;div style="overflow-x: auto; display: block; width: 100%;">
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align:left">Methods&lt;/th>
&lt;th style="text-align:center">Loc&lt;/th>
&lt;th style="text-align:center">Cnt&lt;/th>
&lt;th style="text-align:center">Pixel-level IoU&lt;/th>
&lt;th style="text-align:center">Pixel-level F1&lt;/th>
&lt;th style="text-align:center">Object-level Dice&lt;/th>
&lt;th style="text-align:center">Object-level AJI&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align:left">Unet* [20]&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">0.606&lt;/td>
&lt;td style="text-align:center">0.745&lt;/td>
&lt;td style="text-align:center">0.715&lt;/td>
&lt;td style="text-align:center">0.511&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">MedT [24]&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">0.662&lt;/td>
&lt;td style="text-align:center">0.795&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">CDNet [8]&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.832&lt;/td>
&lt;td style="text-align:center">0.633&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">Competition Winner [13]&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.691&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">Qu et al. [19]&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">0.579&lt;/td>
&lt;td style="text-align:center">0.732&lt;/td>
&lt;td style="text-align:center">0.702&lt;/td>
&lt;td style="text-align:center">0.496&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">Tian et al. [22]&lt;/td>
&lt;td style="text-align:center">✔&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">0.624&lt;/td>
&lt;td style="text-align:center">0.764&lt;/td>
&lt;td style="text-align:center">0.713&lt;/td>
&lt;td style="text-align:center">0.493&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">CellProfiler [1]&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.404&lt;/td>
&lt;td style="text-align:center">0.597&lt;/td>
&lt;td style="text-align:center">0.123&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">Fiji [21]&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.665&lt;/td>
&lt;td style="text-align:center">0.649&lt;/td>
&lt;td style="text-align:center">0.273&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">CyCADA [9]&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.705&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.472&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">Hou et al. [10]&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.750&lt;/td>
&lt;td style="text-align:center">-&lt;/td>
&lt;td style="text-align:center">0.498&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align:left">&lt;strong>Ours&lt;/strong>&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">✘&lt;/td>
&lt;td style="text-align:center">0.610&lt;/td>
&lt;td style="text-align:center">&lt;strong>0.762&lt;/strong>&lt;/td>
&lt;td style="text-align:center">0.724&lt;/td>
&lt;td style="text-align:center">&lt;strong>0.542&lt;/strong>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;/div>
&lt;div style="text-align: center; font-size: 0.6em; color: #555; margin-top: 5px;">
Table 3: Results on MoNuSeg. Loc: Localization, Cnt: Contour. * indicates the model is trained from scratch with the same hyperparameters as ours.
&lt;/div>&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Detection:&lt;/strong> &lt;strong>0.811&lt;/strong> F1-score on the BCData dataset.&lt;/li>
&lt;li>&lt;strong>Segmentation:&lt;/strong> AJI score of &lt;strong>0.542&lt;/strong> on the MoNuSeg dataset.&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="4-robustness-benchmarking-for-pathological-foundation-models">4. Robustness Benchmarking for Pathological Foundation Models&lt;/h2>
&lt;p>&lt;strong>Original Paper:&lt;/strong> &lt;em>Benchmarking PathCLIP for Pathology Image Analysis&lt;/em> (JIIM 2025)
&lt;strong>Authors:&lt;/strong> Sunyi Zheng, Xiaonan Cui, Yuxuan Sun, Jingxiong Li, Honglin Li, Yunlong Zhang, Pingyi Chen, Xueping Jing, Zhaoxiang Ye, Lin Yang&lt;/p>
&lt;h3 id="the-challenge-3">The Challenge&lt;/h3>
&lt;p>With the rise of foundation models like PathCLIP, their resilience to real-world interference (color casts, blur, marker pen strokes) remains largely untested.&lt;/p>
&lt;h3 id="our-work--key-findings">Our Work &amp;amp; Key Findings&lt;/h3>
&lt;p>We established a standardized benchmark covering &lt;strong>11 types&lt;/strong> of clinical image corruptions. Stress-testing multiple models (OpenAI-CLIP, PLIP, PathCLIP) revealed critical vulnerabilities.&lt;/p>
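&lt;p>As an illustration of how such a stress test can be organized (the function names and corruption choices below are ours, not the benchmark&amp;rsquo;s actual API), one can corrupt inputs at several severity levels and report the fraction of clean accuracy each model retains:&lt;/p>

```python
import numpy as np

def color_cast(img, shift=(20, -10, 0)):
    """Additive per-channel shift simulating a scanner color cast."""
    return np.clip(img.astype(int) + np.array(shift), 0, 255).astype(np.uint8)

def retained_accuracy(clean_acc, corrupted_accs):
    """Fraction of clean accuracy retained per corruption type,
    averaged over severity levels (lower = more vulnerable)."""
    return {name: float(np.mean(accs) / clean_acc)
            for name, accs in corrupted_accs.items()}
```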
&lt;p>This finding emphasizes the necessity of rigorous pre-processing steps, such as stain normalization and digital marker removal, before clinical deployment.&lt;/p>
&lt;hr>
&lt;h2 id="summary">Summary&lt;/h2>
&lt;p>This phase of our research tackles the &amp;ldquo;imperfect data&amp;rdquo; and &amp;ldquo;inconsistent environment&amp;rdquo; problems in Pathological AI at a theoretical level.&lt;/p>
&lt;ul>
&lt;li>&lt;strong>DAR&lt;/strong> resolves training conflicts in mixed-bias scenarios.&lt;/li>
&lt;li>&lt;strong>Stable TTT&lt;/strong> enables low-cost cross-center adaptation.&lt;/li>
&lt;li>&lt;strong>PSM&lt;/strong> proves the potential of unsupervised learning for cell analysis.&lt;/li>
&lt;li>&lt;strong>Robustness Benchmark&lt;/strong> defines the safety boundaries for foundation models.&lt;/li>
&lt;/ul>
&lt;p>Together, these works form the theoretical cornerstone for the next generation of robust pathological AI systems.&lt;/p></description></item><item><title>Reconstructing Computational Paradigms for Pathological Image Analysis</title><link>https://hzzcl.github.io/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/</link><pubDate>Tue, 21 May 2024 00:00:00 +0000</pubDate><guid>https://hzzcl.github.io/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/</guid><description>&lt;p>The gigapixel scale of Whole Slide Images (WSI), the chronic absence of clinical multimodal data, and the &amp;ldquo;compute wall&amp;rdquo; for fine-tuning large models constitute the &amp;ldquo;Three Major Hurdles&amp;rdquo; restricting the development of high-precision pathological AI.&lt;/p>
&lt;p>This project reconstructs the computational paradigm of pathological image analysis across three dimensions: &lt;strong>low-rank architectural breakthroughs&lt;/strong>, &lt;strong>robust fusion mechanisms for missing modalities&lt;/strong>, and &lt;strong>task-specific efficient fine-tuning&lt;/strong>.&lt;/p>
&lt;h2 id="1-breaking-the-low-rank-bottleneck-in-long-sequences-longmil">1. Breaking the &amp;ldquo;Low-Rank&amp;rdquo; Bottleneck in Long Sequences (LongMIL)&lt;/h2>
&lt;p>&lt;strong>Original Paper:&lt;/strong> &lt;em>Rethinking Transformer for Long Contextual Histopathology Whole Slide Image Analysis&lt;/em> (NeurIPS 2024)
&lt;strong>Authors:&lt;/strong> Honglin Li, Yunlong Zhang, Pingyi Chen, Zhongyi Shui, Chenglu Zhu, Lin Yang&lt;/p>
&lt;h3 id="the-scientific-question-the-transformers-achilles-heel-in-wsi">The Scientific Question: The Transformer&amp;rsquo;s &amp;ldquo;Achilles&amp;rsquo; Heel&amp;rdquo; in WSI&lt;/h3>
&lt;p>When processing WSIs containing tens of thousands of patches, traditional Transformers face two critical challenges:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Explosive Complexity:&lt;/strong> The $O(N^2)$ complexity of standard Self-Attention makes memory usage unsustainable.&lt;/li>
&lt;li>&lt;strong>Low-Rank Bottleneck:&lt;/strong> We theoretically revealed that when sequence length $N$ far exceeds embedding dimension $D$, the attention matrix exhibits mathematical &amp;ldquo;low-rank&amp;rdquo; properties. This means attention maps become homogenized, failing to capture fine-grained local microenvironmental differences.&lt;/li>
&lt;/ol>
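&lt;p>The second point can be checked numerically: for random $Q$ and $K$ with embedding dimension $D = 64$, the $N \times N$ score matrix $QK^\top$ has rank at most $D$ no matter how large $N$ grows. A minimal NumPy illustration (real attention additionally applies scaling and softmax):&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 1024, 64                       # sequence length N far exceeds dim D
Q = rng.standard_normal((N, D))
K = rng.standard_normal((N, D))
scores = Q @ K.T                      # (N, N) attention logits
print(np.linalg.matrix_rank(scores))  # bounded by D = 64, not by N
```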
&lt;h3 id="core-method-local-global-hybrid-attention">Core Method: Local-Global Hybrid Attention&lt;/h3>
&lt;p>To break the rank limit and reduce computation, we propose the &lt;strong>LongMIL&lt;/strong> architecture:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Local Attention Mask:&lt;/strong> By introducing local window constraints, we force the model to focus on interactions within local neighborhoods. Theory proves this sparsification significantly increases the &lt;strong>Rank&lt;/strong> of the attention matrix.&lt;/li>
&lt;li>&lt;strong>Linear Complexity:&lt;/strong> Utilizing a Chunked Computation strategy reduces complexity from quadratic $O(N^2)$ to linear $O(N \times w)$ (where $w$ is window size).&lt;/li>
&lt;li>&lt;strong>Dual-Stream Architecture:&lt;/strong> A &amp;ldquo;Local-First, Global-Second&amp;rdquo; design captures cell community features before aggregating slide-level information.&lt;/li>
&lt;/ul>
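&lt;p>The windowed constraint above can be sketched as a banded boolean mask. This is only an illustration of which token pairs are allowed to interact (roughly $N \times w$ entries instead of $N^2$); the actual LongMIL implementation uses chunked computation rather than a dense mask, and the function name is ours:&lt;/p>

```python
import numpy as np

def local_attention_mask(n, w):
    """Boolean (n, n) mask: token i may attend to j iff |i - j| <= w // 2.
    Dense here for clarity; only about n * w entries are ever True."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= w // 2
```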
&lt;p>
&lt;figure id="figure-figure-3-the-longmil-framework-stage-1-prepares-features-stage-2-uses-local-masks-for-accelerated-attention-overall-stage-models-the-hierarchy-from-local-to-global">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="LongMIL Architecture" srcset="
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig1_hu711c45aa60dabc8ce01810a864ea0a33_66080_47d4ff0e2529758e5b8e45df0157b493.webp 400w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig1_hu711c45aa60dabc8ce01810a864ea0a33_66080_99a9d307123b980f35063cc476536b62.webp 760w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig1_hu711c45aa60dabc8ce01810a864ea0a33_66080_1200x1200_fit_q80_h2_lanczos_2.webp 1200w"
src="https://hzzcl.github.io/resume.io/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig1_hu711c45aa60dabc8ce01810a864ea0a33_66080_47d4ff0e2529758e5b8e45df0157b493.webp"
width="760"
height="508"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Figure 3: The LongMIL framework: Stage-1 prepares features; Stage-2 uses Local Masks for accelerated attention; Overall-stage models the hierarchy from local to global.
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;h3 id="results">Results&lt;/h3>
&lt;p>&lt;strong>Table 1&lt;/strong>&lt;/p>
&lt;div style="overflow-x: auto; display: block; width: 100%;">
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align:left">Method&lt;/th>
&lt;th style="text-align:center">ViT-S Lunit [36] F1&lt;/th>
&lt;th style="text-align:center">ViT-S Lunit [36] AUC&lt;/th>
&lt;th style="text-align:center">ViT-S DINO (our pre-train) F1&lt;/th>
&lt;th style="text-align:center">ViT-S DINO (our pre-train) AUC&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>&lt;td style="text-align:left">KNN (Mean)&lt;/td>&lt;td style="text-align:center">0.503 ± 0.011&lt;/td>&lt;td style="text-align:center">0.691 ± 0.007&lt;/td>&lt;td style="text-align:center">0.430 ± 0.029&lt;/td>&lt;td style="text-align:center">0.649 ± 0.008&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">KNN (Max)&lt;/td>&lt;td style="text-align:center">0.472 ± 0.009&lt;/td>&lt;td style="text-align:center">0.771 ± 0.018&lt;/td>&lt;td style="text-align:center">0.416 ± 0.019&lt;/td>&lt;td style="text-align:center">0.645 ± 0.007&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">Mean-pooling&lt;/td>&lt;td style="text-align:center">0.534 ± 0.026&lt;/td>&lt;td style="text-align:center">0.741 ± 0.017&lt;/td>&lt;td style="text-align:center">0.487 ± 0.034&lt;/td>&lt;td style="text-align:center">0.717 ± 0.020&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">Max-pooling&lt;/td>&lt;td style="text-align:center">0.649 ± 0.032&lt;/td>&lt;td style="text-align:center">0.843 ± 0.018&lt;/td>&lt;td style="text-align:center">0.598 ± 0.032&lt;/td>&lt;td style="text-align:center">0.818 ± 0.006&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">AB-MIL [32]&lt;/td>&lt;td style="text-align:center">0.668 ± 0.032&lt;/td>&lt;td style="text-align:center">0.866 ± 0.016&lt;/td>&lt;td style="text-align:center">0.621 ± 0.048&lt;/td>&lt;td style="text-align:center">0.837 ± 0.035&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">DS-MIL [40]&lt;/td>&lt;td style="text-align:center">0.607 ± 0.044&lt;/td>&lt;td style="text-align:center">0.824 ± 0.028&lt;/td>&lt;td style="text-align:center">0.622 ± 0.063&lt;/td>&lt;td style="text-align:center">0.808 ± 0.033&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">CLAM-SB [50]&lt;/td>&lt;td style="text-align:center">0.647 ± 0.020&lt;/td>&lt;td style="text-align:center">0.836 ± 0.021&lt;/td>&lt;td style="text-align:center">0.627 ± 0.032&lt;/td>&lt;td style="text-align:center">0.836 ± 0.009&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">DTFD-MIL MaxS [89]&lt;/td>&lt;td style="text-align:center">0.597 ± 0.025&lt;/td>&lt;td style="text-align:center">0.874 ± 0.026&lt;/td>&lt;td style="text-align:center">0.521 ± 0.059&lt;/td>&lt;td style="text-align:center">0.807 ± 0.016&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">DTFD-MIL AFS [89]&lt;/td>&lt;td style="text-align:center">0.608 ± 0.083&lt;/td>&lt;td style="text-align:center">0.869 ± 0.018&lt;/td>&lt;td style="text-align:center">0.538 ± 0.053&lt;/td>&lt;td style="text-align:center">0.824 ± 0.011&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">TransMIL [65]&lt;/td>&lt;td style="text-align:center">0.648 ± 0.054&lt;/td>&lt;td style="text-align:center">0.835 ± 0.031&lt;/td>&lt;td style="text-align:center">0.591 ± 0.049&lt;/td>&lt;td style="text-align:center">0.798 ± 0.029&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">Full Attention&lt;/td>&lt;td style="text-align:center">0.689 ± 0.036&lt;/td>&lt;td style="text-align:center">0.870 ± 0.010&lt;/td>&lt;td style="text-align:center">0.648 ± 0.028&lt;/td>&lt;td style="text-align:center">0.839 ± 0.018&lt;/td>&lt;/tr>
&lt;tr>&lt;td style="text-align:left">LongMIL (ours)&lt;/td>&lt;td style="text-align:center">0.706 ± 0.025&lt;/td>&lt;td style="text-align:center">0.888 ± 0.019&lt;/td>&lt;td style="text-align:center">0.657 ± 0.026&lt;/td>&lt;td style="text-align:center">0.848 ± 0.004&lt;/td>&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;/div>
&lt;div style="text-align: center; font-size: 0.6em; color: #555; margin-top: 5px;">
Table 1: Slide-Level Survival Prediction based on HIPT [9] pre-trained embedding with variousWSI-MIL architectures including vanilla attention, GCN, TransllL, self-attention (HIPT with regionslicing and absolute embedding), full self-attention and our LongMl.
&lt;/div>
&lt;p>&lt;strong>Table 2&lt;/strong>
&lt;div style="overflow-x: auto; display: block; width: 100%;">
Method,COADREAD,STAD,BRCA
AB-MIL [32],0.566 ± 0.075,0.562 ± 0.049,0.549 ± 0.057
AMISL [86],0.561 ± 0.088,0.563 ± 0.067,0.545 ± 0.071
DS-MIL [40],0.470 ± 0.053,0.546 ± 0.047,0.548 ± 0.058
GCN-MIL [43],0.538 ± 0.049,0.513 ± 0.069,-
HIPT [9],&lt;!-- raw HTML omitted -->0.608 ± 0.088&lt;!-- raw HTML omitted -->,0.570 ± 0.081,-
TransMIL [65],0.597 ± 0.134,0.564 ± 0.080,0.587 ± 0.063
Full Attention,0.603 ± 0.048,&lt;!-- raw HTML omitted -->0.568 ± 0.074&lt;!-- raw HTML omitted -->,&lt;!-- raw HTML omitted -->0.601 ± 0.047&lt;!-- raw HTML omitted -->
LongMIL (ours),0.624 ± 0.057,0.589 ± 0.066,0.619 ± 0.053
&lt;/div>&lt;/p>
&lt;div style="text-align: center; font-size: 0.6em; color: #555; margin-top: 5px;">
Table 2: Slide-Level Tumor Subtyping on BRACS by using two pre-trained embeddings. Top Rows.Various WSI-MI architectures with vanilla attention (no interaction among different instances)Bottom Rows. TransMlL, (using Nyströmformer and learnable absolute position embedding), fullattention (+RoPE) and our LongMIL.
&lt;/div>
&lt;p>On &lt;strong>BRACS&lt;/strong> and &lt;strong>TCGA-BRCA&lt;/strong> datasets:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Performance:&lt;/strong> F1-score reached &lt;strong>0.657&lt;/strong> on BRACS tumor subtyping, significantly outperforming SOTA methods such as TransMIL.&lt;/li>
&lt;li>&lt;strong>Extrapolation:&lt;/strong> In &amp;ldquo;train small, test large&amp;rdquo; experiments, LongMIL remained robust (p-value $\approx$ 0.1), demonstrating its adaptability to varying WSI sizes.&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="2-feature-mining-under-weak-supervision-attention-challenging-mil-acmil">2. Feature Mining under Weak Supervision: Attention-Challenging MIL (ACMIL)&lt;/h2>
&lt;p>&lt;strong>Original Paper:&lt;/strong> &lt;em>Attention-Challenging Multiple Instance Learning for Whole Slide Image Classification&lt;/em> (ECCV 2024)
&lt;strong>Authors:&lt;/strong> Yunlong Zhang, Honglin Li, Yunxuan Sun, Sunyi Zheng, Chenglu Zhu, Lin Yang&lt;/p>
&lt;h3 id="the-scientific-question-attention-laziness">The Scientific Question: Attention &amp;ldquo;Laziness&amp;rdquo;&lt;/h3>
&lt;p>In Weakly Supervised Multiple Instance Learning (MIL), models tend to focus only on the most obvious discriminative regions (e.g., tumor cores), ignoring edges or atypical key features. This &amp;ldquo;Attention Laziness&amp;rdquo; leads to poor generalization on heterogeneous tumors.&lt;/p>
&lt;p>
&lt;figure id="figure-figure-3-motivation-of-mba-left--figure-4-motivation-of-stkim">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="ACMIL Comparison" srcset="
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig2_hu3e5b7b25a6d59df4392e632fa46e4b39_56758_8f8a9c68d8e0fd7523c33eaa47f89137.webp 400w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig2_hu3e5b7b25a6d59df4392e632fa46e4b39_56758_96e810c97dde816de711fbff4bfac396.webp 760w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig2_hu3e5b7b25a6d59df4392e632fa46e4b39_56758_1200x1200_fit_q80_h2_lanczos_2.webp 1200w"
src="https://hzzcl.github.io/resume.io/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig2_hu3e5b7b25a6d59df4392e632fa46e4b39_56758_8f8a9c68d8e0fd7523c33eaa47f89137.webp"
width="760"
height="437"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Figure 3: Motivation of MBA (left) &amp;amp; Figure 4: Motivation of STKIM.
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;h3 id="core-method-adversarial-attention-enhancement">Core Method: Adversarial Attention Enhancement&lt;/h3>
&lt;p>We propose the &lt;strong>ACMIL&lt;/strong> framework to &amp;ldquo;manufacture difficulty&amp;rdquo; for the model:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Multi-Branch Attention (MBA):&lt;/strong> Parallel attention branches capture distinct clustering patterns in the feature space (verified via UMAP), covering more diverse pathological features.&lt;/li>
&lt;li>&lt;strong>Stochastic Top-K Instance Masking (STKIM):&lt;/strong> During training, we randomly &amp;ldquo;mask&amp;rdquo; the Top-K instances with the highest attention scores, forcing the model to mine discriminative evidence from the remaining, less salient regions.&lt;/li>
&lt;/ul>
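&lt;p>The STKIM idea can be illustrated with a minimal numpy sketch (our own simplification: it operates on a plain score vector, whereas the actual method masks attention inside the network; the function name and defaults here are ours, not the paper's):&lt;/p>

```python
import numpy as np

def stkim(attn, k=3, mask_prob=0.6, rng=None):
    """Stochastic Top-K Instance Masking (illustrative sketch).

    Each of the k highest-scoring instances is independently zeroed
    with probability mask_prob, then the scores are renormalized, so
    the model must also attend to less salient instances.
    """
    rng = np.random.default_rng(rng)
    attn = np.asarray(attn, dtype=float).copy()
    top_k = np.argsort(attn)[-k:]            # indices of the k most-attended instances
    drop = top_k[rng.random(k) < mask_prob]  # each is masked independently
    attn[drop] = 0.0
    total = attn.sum()
    return attn / total if total > 0 else attn

scores = np.array([0.05, 0.40, 0.30, 0.15, 0.10])
# mask_prob=1.0 forces both top-2 instances to be dropped (deterministic demo)
masked = stkim(scores, k=2, mask_prob=1.0, rng=0)
```

&lt;p>With &lt;code>mask_prob&lt;/code> strictly between 0 and 1, only some of the top-scoring instances are suppressed in each iteration, so the model still sees salient evidence while being pushed toward secondary regions.&lt;/p>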
&lt;p>
&lt;figure id="figure-figure-6-heatmap-comparison-showing-acmil-right-covering-broader-tumor-regions-than-the-baseline-left">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="ACMIL Comparison" srcset="
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig3_hu3e307508bcfe60c6dd042fed8e982faf_76954_450ec0f6de286e3cf69dec583944ad5e.webp 400w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig3_hu3e307508bcfe60c6dd042fed8e982faf_76954_3e8a1d3b823477a55556f93c8a427298.webp 760w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig3_hu3e307508bcfe60c6dd042fed8e982faf_76954_1200x1200_fit_q80_h2_lanczos_2.webp 1200w"
src="https://hzzcl.github.io/resume.io/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig3_hu3e307508bcfe60c6dd042fed8e982faf_76954_450ec0f6de286e3cf69dec583944ad5e.webp"
width="760"
height="563"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Figure 6: Heatmap comparison showing ACMIL (right) covering broader tumor regions than the baseline (left).
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;h3 id="results-1">Results&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>Camelyon16:&lt;/strong> Achieved an AUC of &lt;strong>0.954&lt;/strong>, outperforming methods like DTFD-MIL.&lt;/li>
&lt;li>&lt;strong>TCGA-LBC:&lt;/strong> AUC increased to &lt;strong>0.901&lt;/strong> on liquid-based cytology data, proving effectiveness in sparse feature mining.&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="3-addressing-missing-clinical-data-bidirectional-distillation">3. Addressing Missing Clinical Data: Bidirectional Distillation&lt;/h2>
&lt;p>&lt;strong>Original Paper:&lt;/strong> &lt;em>Multi-modal Learning with Missing Modality in Predicting Axillary Lymph Node Metastasis&lt;/em> (BIBM 2023)
&lt;strong>Authors:&lt;/strong> Shichuan Zhang, Sunyi Zheng, Zhongyi Shui, Honglin Li, Lin Yang&lt;/p>
&lt;h3 id="the-scientific-question-the-multimodal-bucket-effect">The Scientific Question: The Multimodal &amp;ldquo;Bucket Effect&amp;rdquo;&lt;/h3>
&lt;p>In clinical practice, WSI and tabular data (genomics, clinical markers) are often asynchronous. Existing multimodal models often suffer a severe performance drop—sometimes below single-modal baselines—when clinical data is missing.&lt;/p>
&lt;h3 id="core-method-bidirectional-distillation--learnable-prompts">Core Method: Bidirectional Distillation &amp;amp; Learnable Prompts&lt;/h3>
&lt;p>We propose a &lt;strong>Bidirectional Distillation (BD)&lt;/strong> framework to teach the model how to handle missingness:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Decoupling:&lt;/strong> Parallel &amp;ldquo;Single-Modal Branch&amp;rdquo; (WSI only) and &amp;ldquo;Multi-Modal Branch&amp;rdquo; (WSI + Clinical).&lt;/li>
&lt;li>&lt;strong>Learnable Prompt:&lt;/strong> A learnable vector acts as a placeholder for missing modalities in the single-modal branch.&lt;/li>
&lt;li>&lt;strong>Bidirectional Distillation:&lt;/strong> We distill fused knowledge from Multi $\to$ Single ($\mathcal{M} \to \mathcal{S}$) and distill pure image features back from Single $\to$ Multi ($\mathcal{S} \to \mathcal{M}$) to prevent noise interference.&lt;/li>
&lt;/ul>
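&lt;p>A toy numpy sketch of the bidirectional distillation loss (illustrative only: the logits, branch names, and the use of forward-plus-reverse KL here are our assumptions for exposition — the paper's exact loss and the learnable-prompt mechanics live inside the trained network):&lt;/p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# Hypothetical per-sample logits from the two branches. When the clinical
# vector is missing, the single-modal branch would substitute its learnable
# prompt vector; here we only sketch the distillation terms.
single_logits = np.array([1.2, 0.3])   # WSI-only branch
multi_logits = np.array([2.0, -0.5])   # WSI + clinical branch

p_s, p_m = softmax(single_logits), softmax(multi_logits)

loss_m2s = kl(p_m, p_s)  # Multi -> Single: transfer fused knowledge
loss_s2m = kl(p_s, p_m)  # Single -> Multi: feed back pure image features
bd_loss = loss_m2s + loss_s2m
```

&lt;p>Because KL divergence is asymmetric, the two directions play distinct roles: the $\mathcal{M} \to \mathcal{S}$ term enriches the image-only branch, while the $\mathcal{S} \to \mathcal{M}$ term keeps the fused branch anchored to image evidence when clinical inputs are noisy or absent.&lt;/p>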
&lt;p>
&lt;figure id="figure-figure-2-the-bd-structure-showing-parallel-branches-the-learnable-prompt-and-the-bidirectional-distillation-loss-paths">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="BD Framework" srcset="
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig4_hu78c89133f1d28c40cec6aab15382bb0b_32450_08bd870a8b0d966025e8bd134ac32c74.webp 400w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig4_hu78c89133f1d28c40cec6aab15382bb0b_32450_46d9ad27adeddddf8180b35634840629.webp 760w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig4_hu78c89133f1d28c40cec6aab15382bb0b_32450_1200x1200_fit_q80_h2_lanczos_2.webp 1200w"
src="https://hzzcl.github.io/resume.io/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig4_hu78c89133f1d28c40cec6aab15382bb0b_32450_08bd870a8b0d966025e8bd134ac32c74.webp"
width="760"
height="390"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Figure 2: The BD structure showing parallel branches, the Learnable Prompt, and the bidirectional distillation loss paths.
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;h3 id="results-2">Results&lt;/h3>
&lt;p>In BCNB Breast Cancer Lymph Node Metastasis prediction:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Resilience:&lt;/strong> With &lt;strong>80%-100%&lt;/strong> clinical data missing, BD maintained an F1-score of &lt;strong>~74.9%&lt;/strong>, while direct filling methods crashed to below 68%.&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="4-low-cost-wsi-adaptation-variational-information-bottleneck-fine-tuning">4. Low-Cost WSI Adaptation: Variational Information Bottleneck Fine-tuning&lt;/h2>
&lt;p>&lt;strong>Original Paper:&lt;/strong> &lt;em>Task-specific Fine-tuning via Variational Information Bottleneck for Weakly-supervised Pathology Whole Slide Image Classification&lt;/em> (CVPR 2023)
&lt;strong>Authors:&lt;/strong> Honglin Li, Chenglu Zhu, Yunlong Zhang, Yuxuan Sun, Zhongyi Shui, Wenwei Kuang, Sunyi Zheng, Lin Yang&lt;/p>
&lt;h3 id="the-scientific-question-the-wsi-compute-wall">The Scientific Question: The WSI &amp;ldquo;Compute Wall&amp;rdquo;&lt;/h3>
&lt;p>Pathology models typically use ImageNet pre-trained backbones, which suffer from a domain gap. However, end-to-end full fine-tuning on WSIs (thousands of patches) requires VRAM far beyond standard GPU capabilities.&lt;/p>
&lt;h3 id="core-method-sparse-critical-instance-selection">Core Method: Sparse Critical Instance Selection&lt;/h3>
&lt;p>Based on &lt;strong>Variational Information Bottleneck (VIB)&lt;/strong> theory, we screen for the &amp;ldquo;minimal sufficient statistics&amp;rdquo;:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>IB Module Screening:&lt;/strong> A lightweight module selects the Top-K diagnostic instances (usually &amp;lt;1000) based on mutual information maximization.&lt;/li>
&lt;li>&lt;strong>Sparse Backpropagation:&lt;/strong> Gradients are back-propagated &lt;strong>only&lt;/strong> through selected instances during fine-tuning, reducing computational overhead by &lt;strong>&amp;gt;10x&lt;/strong>.&lt;/li>
&lt;/ol>
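&lt;p>The selection step can be sketched as follows (a simplification with hypothetical shapes: in the real pipeline the VIB module learns the instance scores, and gradients flow only through the returned subset during fine-tuning):&lt;/p>

```python
import numpy as np

def select_topk_instances(embeddings, scores, k):
    """Keep only the k highest-scoring instances of a WSI bag.

    Fine-tuning cost then scales with k rather than the full bag
    size, since backpropagation touches only this subset.
    """
    idx = np.argsort(scores)[-k:]
    return embeddings[idx], idx

rng = np.random.default_rng(0)
bag = rng.normal(size=(5000, 256))   # e.g. 5,000 patch embeddings per slide
scores = rng.random(5000)            # stand-in for VIB-learned relevance scores
subset, idx = select_topk_instances(bag, scores, k=800)
```

&lt;p>Here 5,000 instances shrink to 800 before any backbone gradients are computed, which is the source of the &amp;gt;10x reduction in fine-tuning overhead.&lt;/p>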
&lt;p>
&lt;figure id="figure-figure-3-the-three-stage-vib-process-learning-the-bottleneck---sparse-representation-fine-tuning---retraining-the-wsi-head">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="VIB Fine-tuning" srcset="
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig5_hua46c3dbc047e93eecb871c2a8bfa86e4_32462_ad15a90ec3ba0a173528e4a1266a73db.webp 400w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig5_hua46c3dbc047e93eecb871c2a8bfa86e4_32462_d4cc262d81b0c43b9b27a21c0868aad8.webp 760w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig5_hua46c3dbc047e93eecb871c2a8bfa86e4_32462_1200x1200_fit_q80_h2_lanczos_2.webp 1200w"
src="https://hzzcl.github.io/resume.io/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig5_hua46c3dbc047e93eecb871c2a8bfa86e4_32462_ad15a90ec3ba0a173528e4a1266a73db.webp"
width="760"
height="332"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Figure 3: The three-stage VIB process: Learning the Bottleneck -&amp;gt; Sparse representation fine-tuning -&amp;gt; Retraining the WSI head.
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;h3 id="results-3">Results&lt;/h3>
&lt;div style="overflow-x: auto; display: block; width: 100%;">
Method,Camelyon-16 F1,Camelyon-16 AUC,TCGA-BRCA F1,TCGA-BRCA AUC,LBP-CECA F1,LBP-CECA AUC
Full Supervision,0.967±0.005,0.992±0.003,-,-,0.741±0.006,0.942±0.002
RNN-MIL [7],0.834±0.017,0.861±0.021,0.776±0.035,0.871±0.033,-,-
AB-MIL [19],0.828±0.013,0.851±0.025,0.771±0.040,0.869±0.037,0.525±0.017,0.845±0.002
DS-MIL [25],0.857±0.023,0.892±0.012,0.775±0.044,0.875±0.041,-,-
CLAM-SB [30],0.839±0.018,0.875±0.028,0.797±0.046,0.879±0.019,0.587±0.014,0.860±0.005
TransMIL [38],0.846±0.013,0.883±0.009,0.806±0.046,0.889±0.036,0.533±0.006,0.850±0.007
DTFD-MIL [45],0.882±0.008,0.932±0.016,0.816±0.045,0.895±0.042,0.569±0.026,0.847±0.003
FT+ CLAM-SB,0.911±0.017,0.956±0.013,0.845±0.032,0.935±0.027,0.718±0.010,0.907±0.005
FT+ TransMIL,0.923±0.012,0.967±0.003,0.848±0.044,0.945±0.020,0.720±0.024,0.918±0.004
FT+ DTFD-MIL,0.921±0.007,0.962±0.006,0.849±0.027,0.951±0.016,0.723±0.008,0.922±0.005
Mean-pooling,0.629±0.029,0.591±0.012,0.818±0.022,0.910±0.032,0.350±0.017,0.735±0.006
Max-pooling,0.805±0.012,0.824±0.016,0.644±0.179,0.826±0.096,0.636±0.064,0.893±0.019
KNN (Mean),0.468±0.000,0.506±0.000,0.633±0.066,0.749±0.055,0.393±0.000,0.650±0.000
KNN (Max),0.559±0.000,0.535±0.000,0.524±0.032,0.639±0.063,0.477±0.000,0.743±0.000
FT+ Mean-pooling,0.842±0.006,0.831±0.007,0.866±0.035,0.952±0.018,0.685±0.014,0.900±0.002
FT+ Max-pooling,0.927±0.011,0.969±0.004,0.852±0.043,0.948±0.019,0.695±0.013,0.912±0.004
FT+ KNN (Mean),0.505±0.000,0.526±0.000,0.784±0.044,0.907±0.034,0.529±0.000,0.737±0.000
FT+ KNN (Max),0.905±0.000,0.916±0.000,0.802±0.063,0.882±0.036,0.676±0.000,0.875±0.000
&lt;/div>
&lt;div style="text-align: center; font-size: 0.6em; color: #555; margin-top: 5px;">
Table 3. Slide-level classification using the ImageNet-1K pre-trained backbone or the proposed fine-tuning (FT) on three datasets. Top rows: different MIL architectures are compared to select the top-3 SOTA methods for validating transfer-learning performance with the ImageNet-1K pre-trained backbone versus FT. Bottom rows: comparison of traditional aggregation and feature-evaluation methods using the ImageNet-1K pre-trained backbone or FT.
&lt;/div>
&lt;p>
&lt;figure id="figure-figure-6-t-sne-visualization-of-different-representations-onpatches-our-method-converts-chaotic-imagenet-lk-and-ssl-fea-tures-into-a-more-task-specifc-and-separable-distribution-thecluster-evaluation-measurement-v-scores-show-weakly-super-vised-fine-tuned-features-are-more-close-to-full-supervision-com-pared-to-others-a-imagenet-1k-pretraining-b-full-patch-supervi-sioncself-supervised-learning-d-fine-tuning-with-wsi-labels">
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="VIB T-SNE" srcset="
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig6_hue4f6be31abb9cb7384858f7bd2691b10_45724_02cef4cbdb0fbde3da10aae5adb9b403.webp 400w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig6_hue4f6be31abb9cb7384858f7bd2691b10_45724_d2eb8a2651cf986bb2ca3f6dac87e01c.webp 760w,
/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig6_hue4f6be31abb9cb7384858f7bd2691b10_45724_1200x1200_fit_q80_h2_lanczos_2.webp 1200w"
src="https://hzzcl.github.io/resume.io/resume.io/insights/reconstructing-computational-paradigms-for-pathological-image-analysis/fig6_hue4f6be31abb9cb7384858f7bd2691b10_45724_02cef4cbdb0fbde3da10aae5adb9b403.webp"
width="486"
height="540"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;figcaption>
Figure 6. t-SNE visualization of different patch representations. Our method converts chaotic ImageNet-1K and SSL features into a more task-specific and separable distribution. The cluster-evaluation measure (V-score) shows that weakly supervised fine-tuned features are closer to full supervision than the alternatives. a. ImageNet-1K pre-training. b. Full patch supervision. c. Self-supervised learning. d. Fine-tuning with WSI labels.
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Performance Leap:&lt;/strong> On Camelyon16, a VIB fine-tuned ResNet-50 with simple Max-pooling achieved an AUC of &lt;strong>0.969&lt;/strong>, a gain of &lt;strong>14.5 points&lt;/strong> over the ImageNet baseline (0.824).&lt;/li>
&lt;li>&lt;strong>Feature Space:&lt;/strong> t-SNE visualization confirms significantly improved inter-class separation.&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="summary">Summary&lt;/h2>
&lt;p>This research directly targets the &amp;ldquo;compute&amp;rdquo; and &amp;ldquo;data&amp;rdquo; bottlenecks in pathological AI deployment.&lt;/p>
&lt;ul>
&lt;li>&lt;strong>LongMIL &amp;amp; ACMIL&lt;/strong> reconstruct WSI attention mechanisms.&lt;/li>
&lt;li>&lt;strong>BD Framework&lt;/strong> solves the pain point of missing clinical data.&lt;/li>
&lt;li>&lt;strong>VIB Fine-tuning&lt;/strong> breaks the compute barrier for large-scale model optimization.&lt;/li>
&lt;/ul>
&lt;p>Together, these provide the core algorithmic support for building high-precision, low-cost, and robust pathological AI systems.&lt;/p></description></item></channel></rss>