The implementation of the AI-driven workflow resulted in significant improvements:
- Increased Speed: Pathologists reported a 30% to 50% increase in workflow speed, with the diagnosis of benign cases often reduced to mere seconds.
- Enhanced Sensitivity: The AI was slightly biased toward sensitivity, ensuring that atypical cells were prioritized for review.
- Objective Data: The system provides quantitative metrics for atypical and suspicious cells, reducing subjectivity in the diagnostic process.
- Improved Efficiency: The reduction in screening time allowed the lab to free up capacity for growth without compromising accuracy.
Clinical use of AIxURO resulted in a 30–50% faster overall read time, improved sensitivity with more objective data for atypical cases, and a noticeably better user experience. The system is easy to use, avoids unnecessary features, integrates well with glass slides, and supports increased efficiency and growth in routine clinical practice.
- A total of 100 ThinPrep thyroid FNAC cases with established ground-truth cytologic diagnoses (per The Bethesda System for Reporting Thyroid Cytopathology, TBS), determined by consensus of the original cytology report interpretation and the surgical pathology findings, were analyzed:
- 5 TBS-I, 35 TBS-II, 15 TBS-III, 15 TBS-IV, and 30 TBS-VI cases
- Each case was digitized into single-layer whole-slide images (WSIs) using Leica AT2 or 3DHISTECH scanners.
- The AIxTHY algorithm was applied to identify abnormal cells and quantify cytomorphologic features on each WSI.
- Eight independent reviewers (3 cytopathologists, 5 cytologists) assessed each case under two arms, separated by a two-week washout period:
- Arm 1: Microscopy only
- Arm 2: AI-assisted digital review
- Performance Metrics: Comparison of the study diagnosis with the ground-truth cytology diagnosis between the two arms across 800 total reads.
- Diagnostic agreement for TBS categorization
- Reclassification of TBS category
- Binary reclassification of the TBS-I reads from microscopy by AI-assisted review
- In the Nondiagnostic (TBS-I) category, microscopy (Arm 1) achieved 80.0% (32/40) agreement with the ground truth, whereas AI-assisted review (Arm 2) unexpectedly reduced nondiagnostic calls to 57.5% (23/40).
- Agreement for indeterminate TBS-III cases increased from 29.2% (35/120) in the microscopy arm to 46.7% (56/120) in the AI-assisted review arm.
- Overall, TBS-I reads decreased from 10.0% (80/800) in microscopy to 6.8% (54/800) in AI-assisted review, representing a 32.5% relative reduction (26 fewer TBS-I interpretations).
- Binary reclassification of the 80 TBS-I reads from microscopy (TBS-II as negative; TBS-III+ as positive) showed that AI-assisted review (Arm 2) reclassified:
- 12 as negative (32.1% specificity gain) and
- 4 as positive (40.0% sensitivity gain),
- thereby improving overall diagnostic accuracy and interpretive confidence for previously indeterminate cases.
- AI-assisted review reduced nondiagnostic (TBS-I) interpretations originally rendered by microscopy and improved diagnostic accuracy in thyroid FNAC.
- By reclassifying TBS-I cases into actionable Bethesda categories, AIxTHY enhanced both specificity and sensitivity.
- Reassignment to a benign or malignant diagnosis expedites definitive care.
- These results highlight the potential of AIxTHY as an effective adjunct for digital cytology workflows, supporting more reliable and efficient thyroid FNAC interpretation.
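The TBS-I reduction figures reported above are internally consistent; a minimal Python sketch to verify the arithmetic, using the counts stated in the results (80 of 800 reads vs. 54 of 800 reads):

```python
# Sanity check of the reported TBS-I reduction; counts are taken directly
# from the stated results above.
microscopy_tbs1, ai_tbs1, total_reads = 80, 54, 800

microscopy_rate = microscopy_tbs1 / total_reads                      # 10.0%
ai_rate = ai_tbs1 / total_reads                                      # 6.75%, reported as 6.8%
relative_reduction = (microscopy_tbs1 - ai_tbs1) / microscopy_tbs1   # 32.5%

print(f"{microscopy_rate:.1%} -> {ai_rate:.1%}; "
      f"relative reduction {relative_reduction:.1%} "
      f"({microscopy_tbs1 - ai_tbs1} fewer TBS-I interpretations)")
```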
- 100 urine cytology cases were selected, comprising 60 positive and 40 negative diagnoses for bladder cancer.
- Each slide was digitized into a whole-slide image (WSI) using a Hamamatsu S360 scanner.
- AIxURO was applied to detect abnormal urothelial cells and quantify cytomorphologic features on each WSI.
- Four cytologists independently reviewed all cases under microscopy and AI-assisted review (AIxURO) arms, separated by a washout period; data were pooled for 400 reads per arm.
- Visual fatigue was assessed using the Computer Vision Syndrome Questionnaire (CVS-Q), which rates 16 ocular and visual symptoms by frequency and intensity (total score of 0–32; scores ≥6 indicate CVS).
- Performance Metrics: Comparison of the study diagnosis with the ground-truth cytology diagnosis between the two arms across 400 total reads.
- Binary diagnostic accuracy (positive: AUC and above; negative: NHGUC)
- Mean diagnostic turnaround time (seconds)
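The CVS-Q scoring described above can be sketched as follows. This assumes the instrument's published scoring rule (per-symptom severity = frequency x intensity, with the product 4 recoded to 2), which matches the 0–32 total range and the ≥6 CVS threshold stated above:

```python
# Sketch of CVS-Q scoring: 16 symptoms, total score 0-32, >=6 suggests CVS.
# Assumes the published rule: severity = frequency (0-2) x intensity (1-2),
# with the product 4 recoded to 2 so each symptom contributes 0-2 points.

def cvsq_score(symptoms):
    """symptoms: list of (frequency, intensity) pairs, one per symptom.
    frequency: 0 = never, 1 = occasionally, 2 = often/always
    intensity: 1 = moderate, 2 = intense (ignored when frequency is 0)
    """
    total = 0
    for freq, inten in symptoms:
        severity = freq * inten
        total += min(severity, 2)  # recode the product 4 down to 2
    return total

# Illustrative respondent: 4 frequent/intense symptoms, 12 absent.
responses = [(2, 2)] * 4 + [(0, 1)] * 12
score = cvsq_score(responses)
print(score, "CVS" if score >= 6 else "no CVS")  # 8 CVS
```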
- In CVS-Q evaluation, microscopy produced higher mean scores than AIxURO at both the first 50 (5.8 vs. 1.2) and last 50 (10.5 vs. 2.8) reads, indicating greater visual fatigue with microscopy.
- CVS-Q scores progressively increased during microscopy (mean score: 1.8 to 10.5), reaching levels consistent with CVS, whereas only one reviewer exceeded the CVS threshold after 100 reads with AIxURO.
- These results indicate that AIxURO reduces both the likelihood and severity of visual fatigue.
- In diagnostic performance, AIxURO achieved higher sensitivity (0.975 vs. 0.867) and accuracy (0.960 vs. 0.890), with substantially shorter mean turnaround time (24.2 seconds vs. 80.3 seconds) compared with microscopy, indicating improved accuracy and efficiency with AI assistance.
- Compared with microscopy, AIxURO was associated with lower CVS-Q scores, reflecting reduced visual fatigue, along with higher diagnostic accuracy and shorter reporting times.
- These findings support integrating the AI platform into routine cytology practice to enhance user comfort, reduce occupational visual strain, and improve diagnostic performance.
- 14 urine cytology specimens, equally divided and prepared as CytoSpin and ThinPrep slides, were scanned into WSIs on Leica Aperio AT2 and Hamamatsu NanoZoomer S360 scanners in 3 focus modes (default, semiautomatic, and manual) for single-layer scanning, and in manual focus mode for 21-Z-layer scanning
- Performance metrics evaluated included scanning success rate, AI-algorithm-inferred atypical cell numbers and coverage rates (atypical cells in single or multiple Z-layers divided by total atypical cells), scanning time, and image file size
Scanning Success Rates
- Default Mode Scanning: 85.7% (Cytospin, Leica) and 92.9% (ThinPrep, Leica; Cytospin and ThinPrep, Hamamatsu) success rates
- Semi-Auto Mode Scanning: 92.9% (Cytospin, Leica; Cytospin, Hamamatsu and ThinPrep, Hamamatsu) or 100% (ThinPrep, Leica) success
- Manual Scanning: 100% success (all preps with all scanners)
Scanning Times (median)
- Cytospin, Leica: 1010 seconds
- Cytospin, Hamamatsu: 275 seconds
- ThinPrep, Leica: 5357 seconds
- ThinPrep, Hamamatsu: 1429 seconds
File Image Sizes of WSI (using manual focus mode and 21 Z-layer scan settings)
- Cytospin, Leica: 2.0 GB
- Cytospin, Hamamatsu: 4.4 GB
- ThinPrep, Leica: 13.1 GB
- ThinPrep, Hamamatsu: 25.5 GB
Semi-automatic and manual focus modes provide the most reliable slide scanning, with success rates up to 100%. A minimum of 9-layer Z-stacking at 1 µm intervals is necessary to cover 80% of atypical cells. Cytospin preparations take less time to scan and produce smaller files than ThinPrep slides, regardless of scanner. Z-stacking enhances AI-inferred quality and coverage rates of atypical cells, but at the cost of longer scanning times and larger image files.
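The coverage rate used above (atypical cells captured by a subset of Z-layers divided by all atypical cells found across the full stack) can be sketched in a few lines. The cell IDs and layer contents here are illustrative, not study data:

```python
# Coverage rate: fraction of all atypical cells (detected anywhere in the
# full Z-stack) that are also captured by a chosen subset of layers.

def coverage_rate(selected_layers, all_layers):
    """Each layer is a set of atypical-cell IDs detected in that Z-plane."""
    total = set().union(*all_layers)        # every cell seen in any layer
    covered = set().union(*selected_layers) # cells seen in the chosen layers
    return len(covered & total) / len(total)

# Hypothetical 5-layer stack: cell "c" only comes into focus in the outer layer.
layers = [{"a", "b"}, {"a", "b", "d"}, {"b", "d"}, {"a"}, {"c"}]
rate = coverage_rate(layers[:4], layers)  # drop the outer layer
print(rate)  # a, b, d of {a, b, c, d} -> 0.75
```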
- 20 urine cytology slides, 5 types of preparation (cytospin, ThinPrep nonGYN, ThinPrep Urocyte, and BD CytoRich) were digitized using 3 digital scanners (Roche DP200, Roche DP600, and Hamamatsu NanoZoomer 360)
- Images were evaluated for quality, focus and color from the default, manual and advanced scanning modes by a senior cytologist
- Roche DP200 and DP600 scanners achieved good quality WSI (30-50% in default mode and 40-65% in manual mode)
- Hamamatsu achieved 90% good quality imaging only in the manual mode, whereas the default quality dropped to 15%. It also had a lower coefficient of variation for average number of atypical cells across all preparation types
Overall, the Hamamatsu scanner showed better image quality in manual mode than the Roche scanners, which performed better in default mode. The Hamamatsu scanner also showed improved detection of atypical urothelial cells among triplicate images of the same samples compared with the Roche scanners.
- 116 urine cytology slides scanned as WSIs and analyzed by an AI deep-learning software system, which ranked the most suspicious or atypical urothelial cells based on The Paris System 2.0 (TPS 2.0) for Reporting Urine Cytology
- The cell image gallery displayed the top 14 abnormal cells in the viewing software for expert analysis
- 1 cytopathologist (CP) and 2 cytologists (CTs) assigned a TPS 2.0 diagnosis to each case: NHGUC, AUC, SHGUC or HGUC; results were compared between reviewers
- The CP demonstrated the highest specificity, PPV, accuracy, and agreement with the ground truth, whereas the cytologists (CTs) had greater sensitivity and NPV
- Both CP and CTs had reduced sensitivity, PPV and agreement in the diagnosis of AUC vs NHGUC
Decreased sensitivity and PPV in differentiating AUC from NHGUC may be the result of the usual clinical workflow, where cytologists initially screen and mark suspicious regions as abnormal, but defer to CP for a more specific interpretation; it may also reflect less experience with digital cytology interpretation.
- 60 paired cytospin and CytoRich slides from 30 patients were scanned into WSIs and inferred using AIxURO AI-driven software, which ranked the top 24 most suspicious cells/groups
- 3 senior cytologists with variable digital interpretive experience (A: over 1 year; B: 6 months to 1 year; C: less than 1 month) evaluated the interface gallery of images to render a diagnosis
- Diagnostic performance was consistent across the 2 preparation types, with cytologists A and B showing excellent performance
- All reviewers spent similar amounts of time on review per preparation type, with cytologist C taking the least amount of time.
The AI-assisted tool supported excellent performance in aiding the interpretation of upper GU tract urothelial carcinoma.
- 116 urine cytology slides digitized into WSI and analyzed by AI-assisted software to identify and categorize abnormal cells into suspicious (likely SHGUC or HGUC) or atypical (AUC) categories
- 1 cytopathologist (CP) and 2 cytologists (CTs) reviewed all slides microscopically (Arm 1), as WSIs (Arm 2), and with AI assistance presenting the top 24 most abnormal cells in a gallery display (Arm 3), with a 2-week washout period between reviews
- Performance Metrics: Comparison of the expert panel consensus on the glass slides (86 negative, 30 positive) with the review diagnoses to calculate sensitivity, specificity, PPV, and NPV, along with the time spent examining each slide for each arm per individual
- AI-assisted software (AIxURO) improved the overall sensitivity (from 82.2% to 92.2%) and NPV (from 93.4% to 96.5%) but decreased specificity (from 87.2% to 75.6%), PPV (from 69.2% to 56.8%) and accuracy (from 85.9% to 79.9%) compared to microscopy. WSI-only review (Arm 2) performed the worst of the three arms.
- Time for slide review was significantly reduced overall (from 159.9 min for microscopy to 106.3 min for AI-assisted review), as well as per individual slide (from 1.38 min to 0.92 min)
AI-assistance improves the sensitivity and NPV of urine cytology review, but at the expense of a decrease in specificity, PPV and accuracy. This may indicate that AI-assistance will facilitate the detection of abnormal urothelial cells while lowering the time required for slide review.
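The performance figures throughout these summaries (sensitivity, specificity, PPV, NPV, accuracy) all derive from a standard 2x2 confusion matrix against the ground truth. A minimal sketch; the counts in the example are illustrative, not taken from the study:

```python
# Standard diagnostic performance metrics from 2x2 confusion-matrix counts.

def diagnostic_metrics(tp, fp, tn, fn):
    """tp/fp/tn/fn: true/false positives and negatives vs. ground truth."""
    return {
        "sensitivity": tp / (tp + fn),              # true-positive rate
        "specificity": tn / (tn + fp),              # true-negative rate
        "ppv": tp / (tp + fp),                      # positive predictive value
        "npv": tn / (tn + fn),                      # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a 116-slide set (30 positive, 86 negative).
m = diagnostic_metrics(tp=27, fp=10, tn=76, fn=3)
print({k: round(v, 3) for k, v in m.items()})
```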
- 52 urine cytology slides from bladder cancer patients (24 cytospin, 16 ThinPrep, 12 CytoRich) were digitally scanned with a Leica AT2 using conventional Z-stacking (Z = 0 plus 10 layers above and below at 1 µm intervals) and compared with a heuristic scanning simulation method in which AI determines regions of interest (ROIs) for scanning and the software selects the ideal number of layers, from 3 layers (Z = 0 ± 1) to 21 layers (Z = 0 ± 10).
- Performance Metrics: Total number of suspicious cells; coverage rate (ratio of suspicious cells in single vs multiple Z-layers to the total suspicious cells in 21 Z-layers); scanning time (seconds); and image file size for storage (in gigabytes).
- Heuristic scanning was comparable to Z-stacking with similar average numbers of suspicious cell coverage rates (79.3% cytospin; 85.9% TP; 78.3% CytoRich) to those of Z-stacking (81.9% in 5 layers for cytospin, 87.1% in 9 layers for TP, 82.9% in 7 layers for CytoRich).
- Heuristic scanning significantly reduced scanning time (by 65–72%) and image file size (by 46–64%).
- Heuristic scanning in low cellularity slides was more accurate (7/7 correct diagnoses) than using single Z-layer WSI (4/7 correct diagnoses)
Heuristic scanning is an effective alternative to conventional Z-stacking to identify suspicious cells on urine cytology slides, with the advantage of reducing scanning time and image file size.
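The underlying question, how many Z-layers are needed to reach a given coverage of the suspicious cells, can be sketched as a greedy symmetric expansion around the focal plane. This is an illustrative assumption, not the AIxURO heuristic itself, and the layer contents are made up:

```python
# Find the smallest symmetric Z-stack (Z = 0, then +/-1, +/-2, ...) that
# captures a target fraction of all suspicious cells seen in the full
# 21-layer stack. Hypothetical sketch; not the actual heuristic algorithm.

def min_layers_for_coverage(layers_by_z, target=0.80):
    """layers_by_z: dict mapping z offset (-10..10) to a set of cell IDs."""
    total = set().union(*layers_by_z.values())
    covered = set()
    for radius in range(0, 11):
        for z in (radius, -radius):
            covered |= layers_by_z.get(z, set())
        if len(covered) / len(total) >= target:
            return 2 * radius + 1  # layer count of the symmetric stack
    return 21

# Hypothetical stack: most cells near the focal plane, a few only far out.
stack = {z: set() for z in range(-10, 11)}
stack[0] = {1, 2, 3, 4, 5, 6}
stack[1] = {7, 8}
stack[-2] = {9}
stack[7] = {10}
print(min_layers_for_coverage(stack))  # 8 of 10 cells within +/-1 -> 3
```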
- 100 paired cytospin and CytoRich urine cytology slides from 50 bladder cancer patients digitized to create WSI and analyzed with AI-assisted software that identifies the most atypical urothelial cells and displays them in a gallery of the top 24
- 3 cytologists with variable digital pathology experience (A: over 12 months; B: 6–12 months; C: less than 1 month) reviewed the AI-assisted images and interpreted them using The Paris System categories
- No significant difference in diagnostic performance of cytologists between preparations (cytospin vs CytoRich)
- Digital pathology experience improved performance, with C having the poorest performance
- Sensitivity = 84–96%; Specificity = 92–96%; PPV = 91.3–95.7%; NPV = 85.2–95.8%; accuracy = 88–94%
- Cytologist C took the least time reviewing slides (median 29.5-30 sec) compared to the other 2 (median 63.5-71.5 sec)
AI-assistance markedly improves efficiency for the interpretation of upper urinary tract cytology. Experience using the software system enhances overall performance and diagnostic concordance.
- 116 urine cytology WSIs (76 cytospin, 40 CytoRich) scanned with Leica AT2 at 20X and analyzed by an AI deep-learning algorithm that ranked the “top 24” most abnormal urothelial cells
- 1 cytopathologist (CP) and 2 cytologists (CTs) evaluated WSIs with AI assistance and diagnosed slides as NHGUC, AUC, SHGUC or HGUC according to The Paris System 2.0.
- Performance Metrics: Concordance with original cytologic diagnosis and time to diagnosis compared to conventional microscopy
- AI-assistance increased sensitivity for all reviewers (83.3–100%) and improved time efficiency (from 159.9 min to 106.3 min)
- Specificity was reduced for the CTs, but not the CP
AI-assistance using a deep learning algorithm to identify and rank abnormal cells improves overall clinical sensitivity for the detection of urothelial carcinoma while improving clinical efficiency through time-savings.

