How-to: use Python tooling for pulling down Zotero data into static webpages
Author: Deniz Aydin
Minutes: 100
This is a how-to guide for getting started with the Python toolkit that I've developed to satisfy the requirements of the D&I project. The basic workflow is as follows:
- you request (JSON) data from the API endpoints exposed by Zotero (the fetch/convert/prettify steps are sketched just after this list)
- you convert the JSON into HTML
- you format the HTML using BeautifulSoup
- you prettify it further by stripping the whitespace
- you clean up the intermediate files and rename the output for use in the bibliography
- finally, you generate the bibliography page, which contains a table that is sortable by each of its columns.
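To make the first few steps concrete, here is a minimal sketch of the fetch, convert, and prettify stages. The group ID, API key, item fields, and output layout below are assumptions for illustration only; the actual toolkit wraps these steps in its own classes, which are called from run_scripts.py (described next).

```python
"""Minimal sketch of the fetch -> convert -> prettify steps described above.

Assumptions (not taken from the toolkit itself): a Zotero group library,
placeholder GROUP_ID / API_KEY values, and an output directory of ./data.
"""
import pathlib

import requests
from bs4 import BeautifulSoup

GROUP_ID = "1234567"       # hypothetical Zotero group library ID
API_KEY = "your-api-key"   # hypothetical Zotero API key
OUT_DIR = pathlib.Path("data")


def fetch_items():
    """Request JSON item data from the Zotero web API."""
    resp = requests.get(
        f"https://api.zotero.org/groups/{GROUP_ID}/items",
        headers={"Zotero-API-Key": API_KEY},
        params={"format": "json", "limit": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def item_to_html(item):
    """Convert one JSON item into a small HTML page, prettified with BeautifulSoup."""
    data = item["data"]
    raw = (
        "<html><head><title>{title}</title></head>"
        "<body><h1>{title}</h1><p>Type: {itemType}</p><p>Date: {date}</p></body></html>"
    ).format(
        title=data.get("title", "Untitled"),
        itemType=data.get("itemType", ""),
        date=data.get("date", ""),
    )
    soup = BeautifulSoup(raw, "html.parser")
    # prettify() indents the markup; dropping blank lines mirrors the
    # "strip the whitespace" step in the workflow above
    return data["key"], "\n".join(
        line for line in soup.prettify().splitlines() if line.strip()
    )


if __name__ == "__main__":
    OUT_DIR.mkdir(exist_ok=True)
    for item in fetch_items():
        key, html = item_to_html(item)
        (OUT_DIR / f"{key}.html").write_text(html, encoding="utf-8")
```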
The main script that is run automatically via the build process is run_scripts.py, which makes calls to the required methods in the following form: imported_module.ClassName.relevant_method().
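For illustration, that calling pattern might look something like the sketch below. The class, method, and step names are hypothetical stand-ins defined inline so the example is self-contained; in the real run_scripts.py they come from the toolkit's own imported modules, so each call reads imported_module.ClassName.relevant_method().

```python
"""Sketch of the ClassName.relevant_method() calling pattern (hypothetical names)."""


class ZoteroClient:
    """Stand-in for the module that pulls JSON down from the Zotero API."""

    @staticmethod
    def download_items():
        print("downloading items from the Zotero API ...")


class HtmlConverter:
    """Stand-in for the module that converts JSON items to prettified HTML."""

    @staticmethod
    def convert_all():
        print("converting JSON to per-entry HTML ...")


class BiblioPage:
    """Stand-in for the module that builds the sortable biblio.html."""

    @staticmethod
    def generate():
        print("generating biblio.html ...")


def main():
    # each workflow step is one ClassName.relevant_method() call;
    # in the real script the class is prefixed by its imported module
    ZoteroClient.download_items()
    HtmlConverter.convert_all()
    BiblioPage.generate()


if __name__ == "__main__":
    main()
```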
This all-encompassing script is included in the build file, so if you run ant you should get the relevant webpages generated. You should have as many webpages (.html files) generated under /data as there are entries in Zotero. You should also get a biblio.html. Then, when you are ready, run your local pyserve.py and visit biblio.html; you can then click on individual entry IDs to be directed to the entry pages.
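If you don't have pyserve.py to hand, a thin wrapper around Python's built-in http.server does the same job for local testing. This is only an assumption about what that script does, not its actual contents.

```python
"""Minimal local server sketch (assumed behaviour of pyserve.py).

Run this from the directory that contains biblio.html and /data, then open
http://localhost:8000/biblio.html in a browser.
"""
import http.server
import socketserver

PORT = 8000  # hypothetical default port


def serve():
    handler = http.server.SimpleHTTPRequestHandler
    with socketserver.TCPServer(("", PORT), handler) as httpd:
        print(f"Serving on http://localhost:{PORT}/ -- open biblio.html from there")
        httpd.serve_forever()


if __name__ == "__main__":
    serve()
```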