Pythia

From The Digital Classicist Wiki
[[category:corpora]]
[[category:dataset]]
[[category:deep learning]]
[[category:digital library]]
[[category:digitization]]
[[category:inscriptions]]
[[category:machine learning]]
[[category:NLP]]
[[category:neural networks]]
[[category:Openaccess]]
[[category:Opensource]]

Latest revision as of 17:07, 14 July 2020

Available

Authors

  • Thea Sommerschield, University of Oxford
  • Yannis Assael, DeepMind
  • Jonathan Prag, University of Oxford

Description

Pythia, a deep learning model for the automatic restoration of Greek inscriptions, is the first ancient text restoration model that recovers missing characters from a damaged text input using deep neural networks. Bringing together the disciplines of ancient history and deep learning, this research offers a fully automated aid to the epigraphic restoration of fragmentary inscriptions, providing ancient historians with multiple textual restorations together with a confidence level for each hypothesis.

Pythia takes a sequence of damaged text as input and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions. The architecture works at both the character and the word level, effectively handling long-term context information while dealing efficiently with incomplete word representations. This makes it applicable to any discipline dealing with ancient texts (philology, papyrology, codicology) and to any language (ancient or modern).
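The dual character- and word-level input described above can be illustrated with a minimal sketch. The placeholder convention here (a "?" per missing character, an "&lt;unk&gt;" token for any word containing gaps) is our own illustration; Pythia's actual preprocessing in the PHI-ML pipeline may use a different encoding.

```python
def tokenize(text):
    """Tokenize a damaged inscription at both character and word level.

    Hypothetical convention: '?' marks a single missing character.
    The character level preserves partial words in full detail, while
    the word level maps any incomplete word to a shared '<unk>' token.
    """
    chars = list(text)
    words = [w if "?" not in w else "<unk>" for w in text.split()]
    return chars, words

# A damaged sequence: the last word has two missing characters.
chars, words = tokenize("και το ψηφισμα τ??ε")
```

A model consuming both streams can use the word level for long-range context and the character level to reason about the exact number and position of the missing characters.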

To train the model, we wrote a non-trivial pipeline to convert PHI, the largest digital corpus of ancient Greek inscriptions, to machine-actionable text, which we call PHI-ML. On PHI-ML, Pythia's predictions achieve a 30.1% character error rate, compared with 57.3% for human epigraphists. Moreover, in 73.5% of cases the ground-truth sequence was among Pythia's top 20 hypotheses, which demonstrates the impact of such an assistive method on the field of digital epigraphy and sets the state of the art in ancient text restoration.
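The character error rate (CER) quoted above can be understood as the character-level edit distance between a predicted restoration and the ground truth, normalised by the length of the ground truth. The sketch below is a plain-Python illustration of that metric; the function name and normalisation choice are ours, not the paper's exact evaluation code.

```python
def cer(prediction, truth):
    """Character error rate: Levenshtein edit distance between the
    predicted and the ground-truth text, divided by the truth length."""
    m, n = len(prediction), len(truth)
    # Dynamic-programming edit distance over characters.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if prediction[i - 1] == truth[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n] / n

# One wrong character out of five gives a CER of 0.2 (20%).
cer("αβγδε", "αβγδη")
```

On this scale, Pythia's 30.1% CER means that on average roughly three characters in every ten predicted need correction, against nearly six in ten for the human baseline reported in the paper.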

To aid further research in the field we created a freely accessible online interactive Python notebook, where researchers can query one of our models to get text restorations and visualise the attention weights. Follow the instructions on the Colab notebook to restore texts for your own personal research.

Furthermore, the PHI-ML dataset and processing pipeline have been published on GitHub, so that any researcher may regenerate PHI-ML and train new models offline. Follow the instructions below under the section "Pythia offline".

References

Article available at:

To quote this work:

Assael, Y., Sommerschield, T., and Prag, J. "Restoring Ancient Text Using Deep Learning: A Case Study on Greek Epigraphy." In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP 2019). Association for Computational Linguistics, pp. 6369–6376.

When using any of this project's source code, please cite:

@inproceedings{assael2019restoring,
  title={Restoring ancient text using deep learning: a case study on {Greek} epigraphy},
  author={Assael, Yannis and Sommerschield, Thea and Prag, Jonathan},
  booktitle={Empirical Methods in Natural Language Processing},
  pages={6369--6376},
  year={2019}
}

Pythia offline

The following snippets show how to regenerate PHI-ML and train new models offline.

Dependencies

pip install -r requirements.txt && \
python -m nltk.downloader punkt

PHI-ML dataset generation

# Download PHI (this will take a while)
python -c 'import pythia.data.phi_download; pythia.data.phi_download.main()'

# Process and generate PHI-ML
python -c 'import pythia.data.phi_process; pythia.data.phi_process.main()'

Preprocessed PHI-ML uploaded by @Holger.Danske800: link

Training

python -c 'import pythia.train; pythia.train.main()'

Evaluation

python -c 'import pythia.test; pythia.test.main()' --load_checkpoint="your_model_path/"

Docker execution

./build.sh
./run.sh <GPU_ID> python -c 'import pythia.train; pythia.train.main()'

Media

Our research has been featured in blog posts by DeepMind and the University of Oxford, and has been covered in articles in The Times, New Scientist, the Financial Times, La Repubblica, Ekathimerini, TechCrunch and others.

It has also been presented at EMNLP-IJCNLP 2019 in Hong Kong, at the British School at Rome, the Oxford Epigraphy Workshop and the OIKOS National Research School in Classical Studies, during lectures at Ca' Foscari in Venice and La Sapienza in Rome, and most recently at the Digital Classicist London Seminar.

Pythia is a partner project of epigraphy.info.

Contact