Difference between revisions of "Pythia"

From The Digital Classicist Wiki

Revision as of 19:39, 9 June 2020

Available

Authors

  • Thea Sommerschield, University of Oxford
  • Yannis Assael, DeepMind
  • Jonathan R. W. Prag, University of Oxford

Description

Ancient History relies on disciplines such as Epigraphy, the study of ancient inscribed texts, for evidence of the recorded past. However, these texts, "inscriptions", are often damaged over the centuries, and illegible parts of the text must be restored by specialists, known as epigraphists. This work presents a novel assistive method for providing text restorations using deep neural networks.

Pythia is the first ancient text restoration model that recovers missing characters from a damaged text input. Bringing together the disciplines of ancient history and deep learning, the present work offers a fully automated aid to the text restoration task, providing ancient historians with multiple textual restorations, as well as the confidence level for each hypothesis.

Pythia takes a sequence of damaged text as input and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions (texts written in the Greek alphabet, dating between the seventh century BCE and the fifth century CE). The architecture works at both the character and word level, thereby handling long-term context information effectively and dealing efficiently with incomplete word representations. This makes it applicable to all disciplines dealing with ancient texts (philology, papyrology, codicology) and to any language, ancient or modern.
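
The restoration task itself can be illustrated with a toy sketch: given damaged text in which missing characters are marked with a placeholder, enumerate candidate fillings and rank them by a score. The unigram-frequency scorer below is a deliberately trivial stand-in (not Pythia's neural model), but it shows the input/output shape of the problem: one damaged sequence in, multiple ranked hypotheses out.

```python
from itertools import product

def restore(damaged, alphabet, freq, top_k=3):
    """Toy restorer: fill every '?' in `damaged` with characters from
    `alphabet` and rank hypotheses by the product of unigram frequencies.
    This stands in for the task only, not for Pythia's architecture."""
    gaps = damaged.count("?")
    hypotheses = []
    for fill in product(alphabet, repeat=gaps):
        it = iter(fill)
        text = "".join(next(it) if c == "?" else c for c in damaged)
        score = 1.0
        for c in fill:
            score *= freq.get(c, 1e-6)
        hypotheses.append((text, score))
    hypotheses.sort(key=lambda h: -h[1])
    return hypotheses[:top_k]

# Illustrative frequencies for a few Greek letters (made-up values)
freq = {"α": 0.11, "ε": 0.09, "ο": 0.10, "ν": 0.08}
print(restore("δ?μος", "αεον", freq))
```

A real model replaces the unigram score with a learned sequence model conditioned on the surrounding context, which is what makes long-range information matter.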

To train it, we wrote a non-trivial pipeline to convert PHI, the largest digital corpus of ancient Greek inscriptions, to machine actionable text, which we call PHI-ML. On PHI-ML, Pythia's predictions achieve a 30.1% character error rate, compared to the 57.3% of human epigraphists. Moreover, in 73.5% of cases the ground-truth sequence was among the Top-20 hypotheses of Pythia, which effectively demonstrates the impact of such an assistive method on the field of digital epigraphy, and sets the state-of-the-art in ancient text restoration.
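
The character error rate (CER) used in the comparison above is, in its standard formulation, the Levenshtein edit distance between a hypothesis and the ground-truth sequence, normalised by the length of the ground truth; a minimal sketch, assuming that standard definition:

```python
def levenshtein(a, b):
    """Edit distance via the Wagner-Fischer dynamic programme,
    keeping only one row of the table at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cer(hypothesis, reference):
    """Character error rate: edit distance over reference length."""
    return levenshtein(hypothesis, reference) / len(reference)

print(cer("δαμος", "δημος"))  # one substitution in five characters → 0.2
```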

To aid further research in the field we created a freely accessible online interactive Python notebook, where researchers can query one of our models to get text restorations and visualise the attention weights.
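
An attention visualisation of the kind offered in the notebook can be approximated offline with matplotlib; the sketch below uses random placeholder weights rather than real model outputs, and simply renders a row-normalised attention matrix as a heatmap over the input characters.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is needed
import matplotlib.pyplot as plt

text = "δ?μος"
rng = np.random.default_rng(0)
# Placeholder attention matrix: rows = output steps, cols = input characters
attn = rng.random((len(text), len(text)))
attn /= attn.sum(axis=1, keepdims=True)  # each row sums to 1

fig, ax = plt.subplots()
ax.imshow(attn, cmap="viridis")
ax.set_xticks(range(len(text)))
ax.set_xticklabels(list(text))
ax.set_yticks(range(len(text)))
ax.set_yticklabels(list(text))
ax.set_xlabel("input character")
ax.set_ylabel("output step")
fig.savefig("attention.png")
```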

Furthermore, the PHI-ML dataset and processing pipeline have been published on GitHub, so that any researcher may regenerate PHI-ML and train new models offline. Follow the instructions below under the section "Pythia offline".

References

Article available at:

  • ACL Anthology: https://www.aclweb.org/anthology/D19-1668/
  • arXiv preprint: https://arxiv.org/abs/1910.06262

Assael, Y., Sommerschield, T., Prag, J. “Restoring Ancient Text Using Deep Learning: A Case Study on Greek Epigraphy.” In Empirical Methods in Natural Language Processing (EMNLP - IJCNLP 2019). Association for Computational Linguistics. 6369-6376.

When using any of this project's source code, please cite:

@inproceedings{assael2019restoring,
  title={Restoring ancient text using deep learning: a case study on {Greek} epigraphy},
  author={Assael, Yannis and Sommerschield, Thea and Prag, Jonathan},
  booktitle={Empirical Methods in Natural Language Processing},
  pages={6369--6376},
  year={2019}
}

Pythia offline

The following snippets provide references for regenerating PHI-ML and training new models offline.

  • Dependencies
pip install -r requirements.txt && \
python -m nltk.downloader punkt
  • PHI-ML dataset generation
# Download PHI (this will take a while)
python -c 'import pythia.data.phi_download; pythia.data.phi_download.main()'

# Process and generate PHI-ML
python -c 'import pythia.data.phi_process; pythia.data.phi_process.main()'

Preprocessed PHI-ML uploaded by @Holger.Danske800: https://drive.google.com/drive/folders/1PQaWYmB02Sc2OC9yokajcsw65wIcLxGD

  • Training
python -c 'import pythia.train; pythia.train.main()'
  • Evaluation
python -c 'import pythia.test; pythia.test.main()' --load_checkpoint="your_model_path/"
  • Docker execution
./build.sh
./run.sh <GPU_ID> python -c 'import pythia.train; pythia.train.main()'

Media

Our research has been featured in blog posts by DeepMind and the University of Oxford, and covered in articles in The Times, New Scientist, La Repubblica, Ekathimerini, TechCrunch and others.

It has also been presented at EMNLP - IJCNLP 2019 in Hong Kong, at the British School at Rome, the Oxford Epigraphy Workshop, the OIKOS National Research School in Classical Studies, during lectures at Venice Ca' Foscari and Rome La Sapienza, and most recently at the Digital Classicist London Seminar.

Pythia is a partner project of epigraphy.info.

Contact

  • thea.sommerschield@classics.ox.ac.uk

Categories: tools | projects | Epigraphy | corpora | dataset | deep learning | digital library | digitization | inscriptions | machine learning | NLP | neural networks | Openaccess | Opensource | repositories | text mining