Background and objectives: Despite recent advances in artificial intelligence for medical imaging, the development of robust deep learning models for identifying malignancy on pathology slides has been limited by substantial inter- and intra-institutional heterogeneity attributable to tissue preparation. For relatively rare cancers, the paucity of available data aggravates this limitation. Here, using ovarian cancer pathology images, we explored the effect of image-to-image style transfer on diagnostic performance.

Methods: We leveraged a relatively large public image set from 142 patients with ovarian cancer in The Cancer Imaging Archive (TCIA) to fine-tune the widely used Inception V3 deep learning model for identifying malignancy on tissue slides. As an external validation, the performance of the developed classifier was tested on a relatively small institutional pathology image set from 32 patients. To reduce the performance deterioration associated with the inter-institutional heterogeneity of pathology slides, we translated the style of the small institutional image set into the style of the large TCIA image set using cycle-consistent generative adversarial networks (CycleGAN).

Results: Without style transfer, the classifier achieved an area under the receiver operating characteristic curve (AUROC) of 0.737 and an area under the precision-recall curve (AUPRC) of 0.710. After style transfer, the AUROC and AUPRC improved to 0.916 and 0.898, respectively.

Conclusions: This study demonstrates a successful application of style transfer technology to generalize a deep learning model to small image sets in the field of digital pathology. Researchers at local institutions can adopt this collaborative approach to make their small image sets compatible with deep learning models trained on large public data.
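The fine-tuning step described in the Methods can be illustrated with a short sketch. The following is a minimal PyTorch example, assuming tiles have been extracted from the slides as 299x299 RGB crops with binary benign/malignant labels; the variable names, learning rate, and auxiliary-loss weight are illustrative assumptions, not the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch: adapt ImageNet-pretrained Inception V3 to binary (benign/malignant)
# tile classification. Tiles are assumed to be 299x299 RGB crops of the slides.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)                      # main head
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)  # auxiliary head
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameter

def train_step(images, labels):
    """One fine-tuning step; `images` is (N, 3, 299, 299), `labels` is (N,)."""
    model.train()
    optimizer.zero_grad()
    # In train mode, torchvision's Inception V3 returns main and auxiliary logits.
    logits, aux_logits = model(images.to(device))
    loss = criterion(logits, labels.to(device)) \
        + 0.4 * criterion(aux_logits, labels.to(device))  # standard aux weight
    loss.backward()
    optimizer.step()
    return loss.item()
```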
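The style-translation step can likewise be sketched. Below is a minimal rendering of the CycleGAN training objective, assuming user-defined generators G (local institution to TCIA style) and F (TCIA to local style) plus discriminators D_tcia and D_local; these names and the cycle weight lambda_cyc = 10.0 follow the original CycleGAN formulation and are hypothetical here, not the study's exact setup. At inference time, only G would be applied to the institutional tiles before classification.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
mse = nn.MSELoss()  # least-squares GAN loss, as in the original CycleGAN paper

def cyclegan_generator_loss(G, F, D_tcia, D_local, x_local, x_tcia,
                            lambda_cyc=10.0):
    """Generator-side CycleGAN loss for one batch of unpaired tiles.

    G, F, D_tcia, D_local are assumed user-defined conv networks (e.g. a
    ResNet-based generator and PatchGAN discriminator); names are illustrative.
    """
    fake_tcia = G(x_local)    # local tile rendered in TCIA style
    fake_local = F(x_tcia)    # TCIA tile rendered in local style

    # Adversarial terms: generators try to make discriminators output "real" (1).
    pred_tcia = D_tcia(fake_tcia)
    pred_local = D_local(fake_local)
    adv = mse(pred_tcia, torch.ones_like(pred_tcia)) \
        + mse(pred_local, torch.ones_like(pred_local))

    # Cycle-consistency: translating there and back should recover the input.
    cyc = l1(F(fake_tcia), x_local) + l1(G(fake_local), x_tcia)

    return adv + lambda_cyc * cyc
```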
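Finally, the reported AUROC and AUPRC can be computed with scikit-learn; the toy labels and scores below are placeholders for illustration, standing in for the classifier's per-sample malignancy probabilities.

```python
from sklearn.metrics import roc_auc_score, average_precision_score

# Toy values for illustration only (1 = malignant); in the study these would
# be ground-truth labels and predicted probabilities from the classifier.
y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2]

print("AUROC:", roc_auc_score(y_true, y_score))
# average_precision_score is a common summary of the precision-recall curve.
print("AUPRC:", average_precision_score(y_true, y_score))
```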