Track: Case Reports

Abstract
Artificial intelligence tools offer potential value in dermatopathology education and decision support, yet differences in their descriptive outputs remain underexplored. This study compared two language-based AI models, Perplexity and ChatGPT, in responding to identical text-only prompts requesting the microscopic, immunohistochemical, and differential diagnostic features of Spitz nevus and Spitz melanoma, without image input. ChatGPT produced expansive, educationally oriented narratives with detailed subsections addressing architecture, cytology, and immunohistochemistry, often including explanatory context such as low-power symmetry, maturation with depth, mitotic patterns, and interpretations of immunostain gradients (HMB45, Ki-67, p16, PRAME, BAP1). Perplexity generated concise, highly structured descriptions emphasizing the most diagnostically relevant features, with precise terminology, clear contrasts between benign and malignant profiles, and literature-backed detail. For Spitz nevus, both models correctly described symmetry, well-circumscribed nests of epithelioid and spindle melanocytes, the presence of Kamino bodies, low superficial mitotic activity, and gradient HMB45 staining. For Spitz melanoma, both identified asymmetry, poor circumscription, cytologic pleomorphism, deep and atypical mitoses, lack of maturation, a high Ki-67 index, and diffuse HMB45 expression, noting differential diagnoses that included atypical Spitz tumor and conventional melanoma. Overall, Perplexity's output was concise and reference-oriented, whereas ChatGPT's was broader, organized into multiple educational subsections, and more explanatory. This comparison suggests that text-only AI prompts can yield clinically accurate histopathologic representations, with Perplexity delivering precision suited to scholarly and diagnostic contexts and ChatGPT producing accessible, teaching-friendly narratives. Combining both output styles may optimize AI use in dermatopathology education and in the assessment of complex melanocytic lesions.