In this, our last Friday Feature of the academic year, we have nine opportunities to share. Have a wonderful summer, and happy DH-ing!
1. Deadline: 13 June, 2017
An absolutely critical project is starting up at HU Berlin to begin exploiting annotated corpora to rebuild the way our students can learn Latin. There is no greater challenge before Latin or any other historical language — this is where our best people should devote their energies if we are to survive. The use of corpora in smart ways is essential to any strategy. Latin has the biggest student population (>600,000 in Germany last time I checked) but the same methods are relevant to Greek and all historical languages.
This is a big opportunity for Latin and other languages — and we can’t expect this level of funding in the US and probably no place outside of Germany.
Clearly this position requires a high level of German. More information available here.
2. Deadline: 30 June, 2017
The European Association for Digital Humanities (EADH) seeks applicants for one Communications Fellowship.
Working together with the Communication Coordinator, the fellow will write news releases, maintain EADH’s website, update its slider with new project descriptions, and disseminate news through our social media channels. The fellow should anticipate spending approximately 2-3 hours per week on the position. The fellowship comes with a small annual stipend of € 600 (£ 500). As the selected candidate will start working in the middle of the year the stipend for the first year will be € 300 (£ 250). The role is well suited for young scholars and academic professionals who wish to develop deeper knowledge of digital humanities in Europe and gain professional experience in social media and communications.
Desired skills include:
– attention to detail
– some knowledge of digital humanities communities in Europe
– excellent written communication skills in English and in a second language
– experience creating and publishing content (Drupal or WordPress)
– experience with social media platforms (Twitter and Facebook)
– skills in graphic design (Photoshop, GIMP)
To apply, submit a CV or résumé and a cover letter describing your interest in and qualifications for the position to Antonio Rojas Castro, Communication Coordinator (email@example.com).
Read the announcement online:
3. Deadline: 30 June, 2017
MEASURE, MODEL, MIX: COMPUTER AS INSTRUMENT
2017 SIGCIS Conference
Philadelphia, Pennsylvania | October 29, 2017
The Special Interest Group in Computing, Information, and Society [SIGCIS] welcomes submissions to its annual conference.
Computers are instruments of action. They are made to measure, model, and mix; count and aggregate; save and surveil; pick, parse, and select; and in a world of embedded systems, they are even designed to listen, wait, and relay. In many instances, these actions involve the computational transformation of other social and technological processes: from software that compiles the census to the suites of code assisting in the digital manipulation of sound and image. In other cases, computers register and create information at scales and speeds we have only begun to grasp: artificial intelligence, machine learning, and “big data” in all its local forms. And while often leveraged as democratizing, computers have long been known to amplify structural inequality, map over difference, and jettison “noise” that cannot be translated into a specific form of information.
Measure, Model, Mix invites scholars and independent researchers across the disciplinary spectrum to explore the historical conditions of computation.
Areas of engagement may include:
– How have bureaucratic, scientific, and aesthetic computational instruments eroded, produced, and reproduced biopolitical and epistemological realities, past and present?
– How can we analyze the relationships between computing and identity categories such as race, gender, sexuality, and ethnicity?
– What are the historical foundations of computing’s contemporary capacity to recognize information?
– How have cultures, subcultures, political systems and identity groups mobilized computational techniques for their own ends?
SIGCIS is especially welcoming of new directions in scholarship. We maintain an inclusive atmosphere for scholarly inquiry, supporting both disciplinary and theoretical interventions from beyond the traditional history of technology, and with respect to promoting diversity in STEM. We welcome submissions from: histories of technology, computing, and science; science and technology studies; studies of women, gender, and sexuality; studies of race, ethnicity, and postcoloniality; film, media, and game studies; software and code studies; network and internet histories; music, sound studies, and art history; and all other applicable domains.
The annual SIGCIS Conference begins immediately after the regular annual meeting of our parent organization, the Society for the History of Technology [SHOT]. SIGCIS welcomes everyone, inclusive of gender identity and expression, sexual orientation, ability, age, appearance, race, nationality or religion. We are committed to fostering a positive, productive space for all participants.
SIGCIS welcomes proposals for individual 15-20 minute papers, 3-4 paper panel proposals, works-in-progress (see below), and non-traditional proposals such as roundtables, software demonstrations, hands-on workshops, etc.
We are pleased to announce a new format for the 2017 SIGCIS Works in Progress (WiP) session. This year, participants will not deliver presentations on their WiP, and there will not be an audience. Instead, the session will serve as a workshop wherein participants will discuss the works in small group sessions.
We invite works in progress — articles, chapters, dissertation prospectuses — of 10,000 words or less (longer works must be selectively edited to meet this length). We especially encourage submissions from graduate students, early career scholars, and scholars who are new to SIGCIS. Authors who submit a WiP will also commit to reading (in advance) two other WiPs, discussing them in a very small group setting, and providing written feedback on one of those WiPs. Scholars who would like to participate in this session without submitting their own WiP are certainly welcome; we ask that they commit to reading (in advance) at least two of the WiPs.
Submissions for WiP only require a 350-400 word abstract, but applicants should plan to circulate their max-10,000-word WiPs no later than October 8, 2017. Scholars who would like to be a reader of WiPs, please email a brief bio or 1-page CV, along with your areas of interest and expertise, to Gerardo Con Diaz [firstname.lastname@example.org].
Submissions are due June 30, 2017. Applicants should download, fill out and follow the instructions on the application cover sheet at http://meetings.sigcis.org/call-for-papers.html. All submissions will require:
– 350-400 word abstract (full panel proposals should include a 300-word panel abstract in addition to 3-4 paper abstracts)
– 1-page CV or resume
Please Note: Individuals already scheduled to participate on the main SHOT program are welcome to submit an additional proposal to our workshop, but should make sure that there is no overlap between the two presentations. However, SIGCIS may choose to give higher priority to submissions from those not already presenting at SHOT. Questions regarding submission procedure should be sent to Kera Allen [email@example.com].
The top financial priority of SIGCIS is the support of travel expenses for graduate students, visiting faculty without institutional travel support, and others who would be unable to attend the meeting without travel assistance. The submission cover sheet includes a box to check if you fall into one of these categories and would like to be considered for an award. There is no separate application form, though depending on the volume of requests and available resources we may need to contact you for further information before making a decision.
Any award offered is contingent on registering for and attending the SIGCIS Conference. Please note that SHOT does not classify the SIGCIS Conference as participation in the SHOT annual meeting, so acceptance by SIGCIS does not imply eligibility for the SHOT travel grant program.
Details of available awards are at http://www.sigcis.org/travelaward.
4. Deadline: 5 July, 2017
The University of Graz (Austria) is offering a Tenure-track professorship in Digital Humanities with a focus on Museology.
For details please see: http://jobs.uni-graz.at/en/KS/7/99/3588
– initially with a limited term of 6 years as Assistant Professor with Qualification Agreement.
– career goal is a transition to an open-ended employment relationship as Associate Professor
– 40 hours per week
– to be occupied in winter semester 2017/18
– reference number: KS/7/99 ex 2016/17
If you are interested, please submit your application documents in accordance with the general application guidelines (which can be found at http://jobs.uni-graz.at/Auswahlverfahren-Laufbahnprofessuren) within the deadline. Your application documents should include the reference number of the position and be sent by email to: firstname.lastname@example.org
5. Deadline: 10 July, 2017
Workshop on Language, Ontology, Terminology and Knowledge Structures (LOTKS – 2017)
In conjunction with the 12th International Conference on Computational Semantics (IWCS), 19th September, 2017 Montpellier (France)
This workshop, the second of a joint series, will bring together two closely related strands of research. On the one hand it will look at the overlap between ontologies and computational linguistics; and on the other the relationship between knowledge modelling and terminologies — as well as the many points of intersection between these two topics.
Languages and Ontologies:
Formal ontologies are taking on an increasingly important role in computational linguistics and automated language processing. Knowledge models and ontologies are of interest to several areas of NLP including, but not limited to, Machine Translation, Question Answering, and Word Sense Disambiguation. At a more abstract level ontologies can help us to model and reason about natural language semantics. They can also be used for the organisation and formalisation of linguistically relevant categories such as those used in tagsets for corpus annotation. At the same time, the fact that formal ontologies are increasingly accessed by users with a limited or no background in formal logic has led to a growing interest in the development of front ends that allow for the easy editing, querying and summarisation of such resources; it has also led to work in developing natural language interfaces for authoring and for evaluating ontologies. Another area that is now beginning to receive more attention is the application of ontologies and taxonomies to the annotation and study of literary texts, as well as of texts more generally in the humanities. This is closely related to the ontology-enhanced modelling of lexicographic resources, another topic which is gaining in popularity.
This brings us to terminology as a linguistic field, where in recent years there has been a shift from merely compiling specialized lexicographic resources to exploring terminology as a tool for structuring knowledge in a given domain. This shift has led to more intelligent ways of accessing, extracting, representing, modelling, visualising and transferring knowledge. Numerous tools for the automatic extraction of terms, term variants, knowledge-rich contexts, definitions, semantic relations, and taxonomies from specialized corpora have been developed for a number of languages, and new theoretical approaches have emerged as potential frameworks for the study of specialized communication. However, the building of adequate knowledge models for practitioners (e.g. experts, researchers, translators, teachers etc.), on the one hand, and for use by NLP applications (including cross-language, cross-domain, cross-device, multimodal, multi-platform applications) on the other, still remains a challenge. LOTKS will provide a forum for discussion on how best to bridge these two sets of requirements.
Motivation and Topics of Interest
This workshop welcomes contributions from researchers in fields such as linguistics, terminology, and knowledge engineering whose work fits in with our topics of interest, as well as from interested industry professionals. Building on the success both of the 1st LangandOnto workshop (co-located with IWCS 2015) and of last year’s joint LangandOnto/TermiKS workshop (co-located with LREC 2016), this workshop aims to create a forum for open discussion that will help to highlight the common areas of interest in the different fields concerned, as well as fostering dialogue between the various approaches taken by each discipline. We therefore particularly welcome approaches with a cross-language, cross-domain and/or interdisciplinary scope.
Topics of interest include but are not limited to:
— NLP-driven ontology modelling
— The use of ontologies to structure linguistic tagsets
— Natural language interfaces to ontologies
— Ontologies for NLP tasks (e.g. textual entailment, summarisation, word sense disambiguation) and Information Retrieval
— Lexical Ontologies
— The use of ontologies in analysing/studying literary texts
— Ontology-driven natural language generation
— Linguistic, cognitive, psycholinguistic, sociolinguistic, computational and hybrid approaches to knowledge modelling
— Construction of terminological knowledge bases
— Terminology modelling for MT
— Knowledge extraction from user-generated content
— Frame-based approaches to knowledge extraction and representation
— Building knowledge resources for less-resourced domains and languages
— Visual components of specialized knowledge bases
— Visualisation techniques for knowledge representations
— Term variation and knowledge representations
— NLP applications for terminology management
— Terminologies in the Digital Humanities
We invite proposals in the form of abstracts of up to 6 pages (up to 4 pages of text + 2 pages for references) for short papers, or up to 8 pages (up to 6 pages of text + 2 pages for references) for long papers. Accepted workshop papers will be published together with the general program papers.
Follow the formatting guidelines for the IWCS general program, which can be found at: https://www.lirmm.fr/iwcs2017/iwcs_instructions.php
Submission via Easychair at https://easychair.org/conferences/submission_show_all.cgi?a=14733768
Camera ready – Requirements
Final paper format: up to 10 pages (8 pages of text + 2 of references).
Paper submissions due: 10th July 2017
Paper notification of acceptance: 31st July 2017
Camera-ready papers due: 4th September 2017
Workshop: 19th September 2017
For all enquiries please contact: email@example.com
6. Deadline: 15 July, 2017
4th Workshop on Computational History (HistoInformatics2017) –
November 6, 2017, Singapore
Held in conjunction with the 26th ACM International Conference on Information and Knowledge Management (CIKM 2017), 6-10 November, Singapore.
The HistoInformatics workshop series brings together researchers in the historical disciplines, computer science and associated disciplines, as well as the cultural heritage sector. Historians, like other humanists, show keen interest in computational approaches to the study and processing of digitized sources (usually text, images, audio). In computer science, experimental tools and methods face the challenge of being validated for their relevance to real-world questions and applications. The HistoInformatics workshop series is designed to bring researchers in both fields together to discuss best practices as well as possible future collaborations.
Traditionally, historical research is based on the hermeneutic investigation of preserved records and artefacts to provide a reliable account of the past and to discuss different hypotheses. Alongside this hermeneutic approach, historians have always been interested in translating primary sources into data and in using methods, often borrowed from the social sciences, to analyze them. The new wealth of digitized historical documents has, however, opened up completely new challenges for the computer-assisted analysis of, for example, large text or image corpora. Historians can greatly benefit from the advances of computer and information sciences, which are dedicated to the processing, organization and analysis of such data. New computational techniques can be applied to help verify and validate historical assumptions. We call this approach HistoInformatics, analogous to Bioinformatics and ChemoInformatics, which have respectively proposed new research trends in biology and chemistry. The main topics of the workshop are: (1) support for historical research and analysis in general through the application of Computer Science theories or technologies, (2) analysis and re-use of historical texts, (3) analysis of collective memories, (4) visualizations of historical data, and (5) access to the large wealth of accumulated historical knowledge.
HistoInformatics workshops have taken place three times in the past. The first one (http://www.dl.kuis.kyoto-u.ac.jp/histoinformatics2013/) was held in conjunction with the 5th International Conference on Social Informatics in Kyoto, Japan in 2013. The second workshop (http://www.dl.kuis.kyoto-u.ac.jp/histoinformatics2014/) took place at the same conference in the following year in Barcelona. The third workshop (http://www.dl.kuis.kyoto-u.ac.jp/histoinformatics2016/) was held in July 2016 in Krakow, Poland in conjunction with ADHO’s 2016 Digital Humanities conference.
For Histoinformatics2017, we are interested in a wide range of topics which are of relevance for history, the cultural heritage sector and the humanities in general. Topics of interest include (but are not limited to):
-Natural language processing and text analytics applied to historical documents
-Analysis of longitudinal document collections
-Search and retrieval in document archives and historical collections, associative search
-Causal relationship discovery based on historical resources
-Named entity recognition and disambiguation
-Entity relationship extraction, detecting and resolving historical references in text
-Finding analogical entities over time
-Computational linguistics for old texts
-Analysis of language change over time
-Digitizing and archiving
-Modeling evolution of entities and relationships over time
-Automatic multimedia document dating
-Applications of Artificial Intelligence techniques to History
-Simulating and recreating the past course of actions, social relations, motivations, figurations
-Handling uncertain and fragmentary text and image data
-Automatic biography generation
-Mining Wikipedia for historical data
-OCR and transcription of old texts
-Effective interfaces for searching, browsing or visualizing historical data collections
-Studies on collective memory
-Studying and modeling forgetting and remembering processes
-Estimating credibility of historical findings
-Probing the limits of Histoinformatics
-Epistemologies in the Humanities and Computer Science
Paper submission deadline: July 15, 2017 (23:59 Hawaii Standard Time)
Notification of acceptance: August 12, 2017
Camera ready copy deadline: August 19, 2017
Workshop date: November 6, 2017
Submissions need to be:
– formatted according to ACM camera-ready template (http://www.acm.org/publications/proceedings-template).
– submitted in English in PDF format at the workshop’s Easychair page (https://easychair.org/conferences/?conf=histoinformatics2017)
Full paper submissions must describe substantial, original, completed and unpublished work, not accepted for publication elsewhere and not currently under review elsewhere. Long papers may consist of up to eight (8) pages of content including references and figures. Short paper submissions must describe a small and focused contribution. Short papers may consist of up to four (4) pages (including references and figures). Accepted papers will be published in the CEUR Workshop Proceedings (http://ceur-ws.org/).
7. Deadline: 1 August, 2017
10th Annual Lawrence J. Schoenberg Symposium on Manuscript Studies in the Digital Age
November 2-4, 2017
In partnership with the Rare Book Department <https://libwww.freelibrary.org/rarebooks/index.cfm> of the Free Library of Philadelphia, the Schoenberg Institute of Manuscript Studies (SIMS http://schoenberginstitute.org/ ) at the University of Pennsylvania Libraries is pleased to announce the 10th Annual Lawrence J. Schoenberg Symposium on Manuscript Studies in the Digital Age.
Despite the linguistic and cultural complexity of many regions of the premodern world, religion supplies the basis of a strong material and textual cohesion that both crosses and intertwines boundaries between communities. This year’s theme, “Intertwined Worlds,” will highlight the confluence of expressions of belief, ritual, and social engagement emerging in technologies and traditions of the world’s manuscript cultures, often beyond a single religious context. It will consider common themes and practices of textual, artistic, literary, and iconographic production in religious life across time and geography, from ancient precedents to modern reception and dissemination in the digital age.
For more information, go to: http://www.library.upenn.edu/exhibits/lectures/ljs_symposium10.html . Registration opens in August.
8. Deadline: 20 August, 2017
*** Digital Culture Seminar in Pisa – Call for Proposals ***
The Digital Culture Seminar (http://www.labcd.unipi.it/seminario/ ) is a seminar course, coordinated by Enrica Salvatori and Maria Simi, compulsory for all students of the Master Degree in Digital Humanities of the University of Pisa. It consists of 18-20 seminars, lasting 2 hours each, on topics relevant to the Digital Humanities, held by scholars and experts from research institutions or by professionals from companies operating in this field. It is intended as an opportunity to explore the discipline in depth and to direct students toward work and research in the Digital Humanities.
The course takes place throughout the academic year with a weekly meeting, typically on Wednesday at 2:00 pm.
In order to organize the course of the next academic year (September 2017-May 2018), scholars and professionals of the Digital Humanities are invited to propose themes and lectures using the form at http://www.labcd.unipi.it/seminario/proponi-un-seminario/
Suggested themes include:
– Digital Culture
– Digital Libraries and Archives
– Electronic Publishing
– Digital art, graphics, design
– 3D modeling, virtual environments
– Web design and programming
– Digital History
– Computational linguistics
– E-learning
– Web marketing, e-commerce
– Distant reading
– Geographical Information Systems
– Big Data
– Intangible cultural heritage
Requests will be evaluated by the organizers (Maria Simi and Enrica Salvatori) and the selected speakers will then be notified privately. Expenses are reimbursed.
9. Deadline: 9 September, 2017
We are very pleased to announce the 5th conference of DH in the German-speaking regions: “Kritik der digitalen Vernunft” / “Critique of digital reason”. The conference will take place in Cologne, 26 February to 2 March 2018.
The official language of the conference is German, but papers, posters and presentations in other languages are welcome. Please find the Call for Papers on the conference website at http://dhd2018.uni-koeln.de/call-for-papers/
By Rachel Rochester and Heidi Kaufman
This is my final blog post for the DH Blog here at UO, and the last installment of our series on the DiRT Directory. I’m very pleased to have been able to spend the year writing about the digital humanities, and I look forward to staying involved next year, even though I’ll be back in the classroom! But today, I wanted to wrap up with my final thoughts on the DiRT Directory and DH in general.
Last week I promised to reveal my final visualizations, and how a little practice helped me make the most of the directory’s resources. If you haven’t read our first two entries on DiRT, check them out here and here.
After realizing that HT-Bookworm wasn’t the right tool for my project, I went back to the drawing board. I’m working with a relatively small sample set – a handful of South Asian postcolonial authors and their bodies of contemporary work – and it became apparent that it made the most sense to mine their data manually. I own all of the texts as e-books already, since that’s my preferred reading method and many of the texts are not readily available in the U.S., so it was easy enough to search each one for keywords. I started with “pollution” and its variants (pollute, polluted, polluting… you get the idea). My thought was that once I had a decent quantity of data, I would find a graphing tool in the directory and make a really cool visualization.
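If you’d rather not run each search by hand, the same variant hunt can be scripted. This is a minimal sketch, not my actual workflow: the sample sentence stands in for an e-book’s text, and the stem `pollut` is the one assumption carried over from my keyword list.

```python
import re
from collections import Counter

# Match the stem "pollut" plus any word ending:
# pollution, pollute, polluted, polluting, pollutes, ...
PATTERN = re.compile(r"\bpollut\w*\b", re.IGNORECASE)

def count_variants(text):
    """Tally each distinct variant of the keyword stem in a text."""
    return Counter(m.group(0).lower() for m in PATTERN.finditer(text))

# Illustrative stand-in for one book's full text.
counts = count_variants(
    "Polluted rivers, polluting factories: pollution everywhere, and it pollutes."
)
print(counts)
```

Run per text, this produces exactly the kind of per-book tally that drops straight into a spreadsheet.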
Once I organized my findings in Microsoft Excel, I started searching for an appropriate digital tool. This time I used the search bar and typed in “graphing,” since I had a pretty good sense of what I wanted to create. The search returned three options: Excel, Gephi, and Ptolemaic. Excel felt too boring – I wanted flash! And Ptolemaic is, according to the description, “a computer application for music visualization” which wouldn’t work for this at all. Instead I turned to Gephi, which is described as “graphing software that provides a way to explore data through visualization and network analysis.” It sounded ideal.
When I downloaded the application, however, I felt instantly lost. I could import a CSV file, but my data wasn’t complete or organized appropriately. I could add nodes manually, but that, too, felt confusing. What was a node? I realized I needed more to get started, and watched a series of YouTube tutorials.
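For anyone equally puzzled at this step: a node is just an entity (an author, a text, a keyword), and an edge is a relation between two of them. Gephi’s spreadsheet importer can read a plain CSV edge list with Source and Target columns; here is a minimal sketch using made-up author–keyword pairs rather than my real data:

```python
import csv
import io

# Hypothetical author–keyword pairs; each row becomes one edge in Gephi,
# with the author and keyword becoming nodes automatically on import.
pairs = [
    ("Author A", "pollution"),
    ("Author A", "toxicity"),
    ("Author B", "pollution"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Source", "Target"])  # column names the edge-table importer expects
writer.writerows(pairs)
print(buf.getvalue())
```

Writing to a real `.csv` file instead of a `StringIO` buffer gives you something Gephi can open directly.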
Gephi graphs are beautiful, and are a useful way to analyze complex data. Take, for example, this graph of the Modernist journal The New Freewoman, showing the network and communities of authors involved in its production.
But after looking at the visualizations, I realized that my data isn’t there yet, and would best be served by a simple bar graph, easily made in Excel or Numbers.
There’s nothing flashy about this visualization, and it probably won’t stay this way. For now, I need a simple, visual way to keep track of my simple raw data while I’m gathering it, and this works best for me. But I don’t regret the time I spent exploring the DiRT Directory, and I’m grateful to have discovered Gephi. Once I have a better sense of the nodes and networks I’m trying to assemble, I’ll probably revisit it, and now I have a sense of what type of data I would need to do so.
Sometimes, digital humanities projects get mired down with flashy tools and new technologies. This experience taught me that it’s not necessary, and can even inhibit the natural line of your inquiry. Above all else, the project and data should drive the technologies used in any DH project. I’m glad I realized that and stopped trying to force something to work where it simply shouldn’t. That said, I look forward to the day a project offers me the opportunity to play with some of the flashier tech on the DiRT Directory, and when that day comes, I’ll know where to learn all about it.
This week we have 18 exciting DH opportunities to share.
1. Deadline: 7 June, 2017
The University of Victoria is seeking a Digital Scholarship Librarian. This position is situated within the Digital Scholarship and Strategy unit of UVic Libraries and reports to the Head, Library Systems. The full job description can be found at http://www.uvic.ca/library/use/info/jobs/documents/DigitalScholarshipLibrarian_PositionDescription_2017.pdf
For more information on negotiated benefits, see: https://www.uvic.ca/vpacademic/faculty/benefits/
Please submit a cover letter, CV, and names of three (3) references by noon, June 7, 2017 to: Jonathan Bengtson, University Librarian, University of Victoria Libraries, firstname.lastname@example.org
2. Deadline: 7 June, 2017
CFP: Advancing Linked Open Data (LOD) in the Humanities
Monday, August 7, 2017 @DH2017, Montreal
This is a call for participation in a half-day workshop on Advancing Linked Open Data (LOD) in the Humanities that will take place on August 7, 2017, one day prior to the start of Digital Humanities 2017 in Montreal, Canada. The workshop seeks to bring together a wide selection of LOD scholars, researchers, and advocates to share ideas for future LOD tools or initiatives.
Prospective participants should submit the following:
A summary (500-word maximum) of your work in LOD to date, with an emphasis on current projects, including a statement of the institutional position and affiliation of the submitter(s), if relevant.
A position paper (500-word maximum) that outlines gaps or opportunities related to current LOD tools and/or suggests ideas for new ways to take advantage of the growing body of LOD in the humanities.
Submission will be via a Google form by June 7th: https://goo.gl/forms/jOvqfgLEx
The submission form requests permission to make your submission part of an openly available online resource with a CC-BY-NC licence. Projects or researchers unable to participate are invited to submit a summary for inclusion in this resource (see below).
Successful submissions will be shared with all participants in advance of the conference. Participants will rank the position papers with a view to their potential to advance work in the field if taken up by the LOD community.
The authors of the four top-ranked proposals will be asked to present a short pecha-kucha-style talk to kick off the workshop. After a short discussion period, participants will then divide into working groups to strategize about how the ideas might be advanced and come back to the larger group with next steps. All participants will regroup for a final discussion and future planning.
3. Deadline: 10 June, 2017
Beyond Editing: Advanced Solutions and Technologies
Prague: 4-8 Sept. 2017
Organised by the Faculty of Arts, Charles University Prague, and CNRS CIHAM UMR 5648, with the support of DARIAH’s Humanities at Scale programme
Call for Applications
This Summer School targets Humanities scholars, librarians and students who have already acquired a working knowledge of digital scholarly editing, especially TEI encoding, and wish to go further. While encoding is a crucial step, translating the model of a text or document into a computer-readable form, scholars also need to put this encoding to good use by displaying, processing and analysing it. To this end, it is necessary to master other technologies, which are often more difficult to learn and for which training opportunities are much rarer.
During this week-long school, the participants will learn how to display, transform and process a scholarly XML edition, with the aim of becoming able to work on their own editions with the latest digital methods.
The week will be organised as follows:
– a “main course” (all the morning sessions), centered on XSLT (eXtensible Stylesheet Language Transformations), a powerful language especially designed to work with XML;
– a few “sides”, or workshops, offered during the afternoon sessions, introducing the participants to more specialised technologies and solutions for enhancing a scholarly edition (geographical data, linguistic tools, network analysis, etc.)

We invite applications from scholars, students (Master level and beyond), librarians, archivists and other research professionals involved in the production and valorisation of scholarly digital editions. The selection committee particularly invites applicants from Central and Eastern Europe.
Participation in the Summer School programme is free and, in addition, selected applicants will receive a bursary: DARIAH’s Humanities at Scale programme will cover the cost of their travel and accommodation up to a maximum of 500 EUR (participants will be refunded after the training school, upon presentation of receipts).
The Summer school will be hosted by the Faculty of Arts, Charles University, in the historical centre of Prague.
How to apply?
To apply, please fill in this online form: https://goo.gl/forms/lqUd5O5BnLzB6ZOH3
Applications are welcome until 10 June 23:00 GMT
4. Deadline: 13 June, 2017
Job Opportunity at The National Archives
Head of Digital Research
About the role
The National Archives has set itself the ambition of becoming a digital archive by instinct and design. The digital strategy takes this forward through the notion of a disruptive archive which positively reimagines established archival practice, and develops new ways of solving core digital challenges. You will develop a research programme to progress this vision and to answer key questions for TNA and the Archives Sector around digital archival practice and delivery. You will understand and navigate the funding landscape, identifying key funders (RCUK and others) and building senior-level relationships to articulate priorities around digital archiving, whilst taking a key role in coordinating digitally focused research bids. You will also build key collaborative relationships with academic partners and undertake horizon scanning of the research landscape, tracking and engaging with relevant research projects nationally and internationally. You will also recognise the importance of developing an evidence base for our research into digital archiving and will lead on the development of methods for measuring impact.
As someone who will be mentoring and managing a team of researchers, as well as leading on digital programming across the organisation, you’ll need to be a natural at inspiring and engaging the people you work with. You will also have the confidence to engage broadly with external stakeholders and partners. Your background and knowledge of digital research, relevant in the context of a memory institution such as The National Archives, will gain you the respect you need to deliver an inspiring digital research programme. You combine strategic leadership with a solid understanding of the digital research landscape as well as the tools and technologies that will underpin the development of a digital research programme. You will come with a strong track record in digital research, a doctorate in a discipline relevant to our digital research agenda, and demonstrable experience of relationship development at a senior level with the academic and research sectors.
Join us here in beautiful Kew, just a 10-minute walk from the Overground and Underground stations, and you can expect an excellent range of benefits. They include a pension, flexible working and childcare vouchers, as well as discounts with local businesses. We also offer well-being resources (e.g. onsite therapists) and have an on-site gym, restaurant, shop and staff bar.
To apply please follow the link: https://www.civilservicejobs.service.gov.uk/csr/jobs.cgi?jcode=1543657
Closing date: Tuesday 13th June at Midnight
5. Deadline: 15 June, 2017
Letters 1916-23 is delighted to announce three job openings: two postdocs and one research assistant. This is a unique opportunity to join a vibrant public engagement project as we enter a new phase of research.
Funding from the Irish Research Council is allowing the project to expand its scope through 1923, covering the Anglo-Irish War, Irish independence, and the Irish Civil War. It is also funding the construction of a new technical framework, from the ingestion of new letters, through publication, to new modalities of text analysis and visualisation.
Be part of one of the most successful crowdsourcing projects in the digital humanities. Further details are available here:
For an informal conversation please contact
6. Deadline: 15 June, 2017
ERC “NOSCEMUS – Nova Scientia: Early Modern Science and Latin” / Ludwig Boltzmann Institute for Neo-Latin Studies
Advertisement of a position for an information scientist (MA) / a philologist (MA) with excellent IT skills
The ERC Advanced Grant programme “NOSCEMUS – Nova Scientia: Early Modern Science and Latin” led by Martin Korenjak (Univ. of Innsbruck) and the Ludwig Boltzmann Institute for Neo-Latin Studies (Innsbruck) led by Florian Schaffenrath are advertising one position for an information scientist (MA, 50%, 01/10/2017–30/09/2022).
Context, tasks and working conditions
The Ludwig Boltzmann Institute for Neo-Latin Studies (LBI) is among the biggest research organisations dedicated exclusively to the study of early modern Latin worldwide. The ERC Advanced Grant programme NOSCEMUS is a five-year project funded by the European Union aiming at a reassessment of the role of Latin in early modern natural science. Both entities will work in close cooperation.
In this context, the main tasks of the information scientist will include the following:
- establishment, development and management of a database for early modern authors, texts and secondary literature (including back-end and front-end)
- digitisation, conversion into machine readable formats and online presentation of early modern texts in cooperation with the Institute for Digitisation and Electronic Archiving of the Univ. of Innsbruck (DEA)
- management of the homepages of the LBI and NOSCEMUS
- preparation of long-term storage of the results of the LBI and NOSCEMUS
The gross salary will be at least € 1,365.50 per month (paid 14 times per year).
Candidates should have an MA in informatics, or an MA in classics combined with extraordinary IT skills. A background in the humanities, or a strong interest in the field, is important, since the main challenge of the work will consist in adapting electronic tools to the needs of projects in this area. For the same reason, a strong capacity for teamwork is required. Candidates must be fluent in both spoken and written English.
Applications should be sent, together with a CV and a letter of motivation, by email to Martin Korenjak (email@example.com).
For further information, please contact Martin Korenjak.
7. Deadline: 15 June, 2017
The Oxford Internet Institute is looking for a full-time Postdoctoral Researcher to work on the ethical challenges posed by digital technologies (digital ethics).
The Postdoctoral Researcher will be a member of the Digital Ethics Lab, and will elaborate new analyses and hypotheses, review the literature, and publish the results, in collaboration with other members of DELab. The selected candidate will also contribute to the dissemination of the findings through presentations, the organisation of workshops, participation in conferences, and social media.
For more information about DELab and its current projects please see
The position is suited to candidates who have recently completed a doctorate in any relevant discipline, especially philosophy, ethics, law, and sociology. The list is not exclusive, and a degree in computer science, AI, machine learning, economics, STS, or geography (this list is only indicative) is also relevant, if combined with a proven interest (e.g. publications, organisation of workshops and panels, and talks) in the ethical, legal, or social impact analysis of digital technologies.
For more information about the position and on how to apply please see
8. Deadline: 16 June, 2017
Canada Research Chair: Applied Communication, Leadership, and Culture Tier 2 (SSHRC). The University of Prince Edward Island (UPEI) invites a highly engaged academic to join our research team in the role of Canada Research Chair (CRC) in Applied Communication, Leadership, and Culture. The successful candidate for this position will have a program of research that fits within the broad, interdisciplinary category of the Social Studies of Science; they will have extensive and varied experience with digital humanities tools (including GIS or alternative mapping software), both within their own scholarly work and within the classroom; they will have a strong record of teaching communication and leadership to undergraduate and graduate students, and a clear understanding of how their own academic research intersects with their teaching of these subjects. Preference will be given to those candidates who have developed a research profile that suggests obvious future collaboration with members of the UPEI research community.
Visit the UPEI Human Resources Academic Positions web site for the link to the Canada Research Chair in Applied Communication, Leadership, and Culture posting: http://www.upei.ca/hr/
Review of applications will begin on 16 June 2017 and will continue until a nominee is selected.
9. Deadline: 25 June, 2017
The First International Workshop on Resources and Tools for Derivational Morphology (DeriMo2017) will be held in Milan (Italy) on 5 and 6 October 2017, at the Università Cattolica del Sacro Cuore (http://derimo2017.marginalia.it/).
DeriMo2017 concludes the Word Formation Latin (WFL) project, funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 658332-WFL. The project is based at the Centro Interdisciplinare di Ricerche per la Computerizzazione dei Segni dell’Espressione (CIRCSE: http://centridiricerca.unicatt.it/circse-home?rdeLocaleAttr=en), at the Università Cattolica del Sacro Cuore, Milan, Italy.
Submissions are invited for presentations featuring high quality and previously unpublished research on the topics described below. Contributions should focus on results from completed as well as ongoing research, with an emphasis on novel approaches, methods, ideas, and perspectives, whether descriptive, theoretical, formal or computational.
Proceedings will be published, open-access, in time for the workshop.
MOTIVATION AND AIMS
Until very recently, derivational morphology was neglected in the areas of Language Resources and Natural Language Processing (NLP) compared to inflectional morphology. Yet the recent rise of lexical resources for derivational morphology has demonstrated that enriching textual data with derivational tagging can lead to strong outcomes. First, it organises the lexicon at a higher level than individual words, by building word-formation-based sets of lexical items that share a common derivational ancestor. Secondly, derivational morphology acts as a kind of interface between morphology and semantics, since core semantic properties are shared, to varying extents, by words built through a common word-formation process.
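To make the first point concrete, grouping lexical items by a common derivational ancestor amounts to following each derivative back along its word-formation links. The following is a minimal, hypothetical sketch (the toy Latin data and function names are invented for illustration, not taken from WFL):

```python
# Illustrative sketch: group lexical items into word-formation families
# by following each derivative back to its ultimate derivational ancestor.
# The toy Latin derivation links below are invented for illustration.
derived_from = {
    "amator": "amo",          # amator   <- amo
    "amabilis": "amo",        # amabilis <- amo
    "amabilitas": "amabilis", # amabilitas <- amabilis <- amo
    "lectio": "lego",
    "lector": "lego",
}

def ancestor(word):
    """Follow derivational links until a word with no recorded base."""
    while word in derived_from:
        word = derived_from[word]
    return word

# Collect every word (derivatives and bases) under its family ancestor.
families = {}
for word in list(derived_from) + list(derived_from.values()):
    families.setdefault(ancestor(word), set()).add(word)

for base, members in sorted(families.items()):
    print(base, sorted(members))
```

A resource like WFL encodes far richer information (the word-formation rules themselves, affixes, parts of speech), but the family-building idea is essentially this transitive closure over derivation links.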
In the lively area of research aimed at building computational resources and tools for ancient languages, the WFL project fills a gap in the variety of those available for Latin, connecting lexical items on the basis of word formation rules. For a work-in-progress version of the resource, please visit http://wfl.marginalia.it.
This workshop is intended both as an opportunity to present WFL to the wider community, and as a forum for exchange with other scholars working on derivational morphology for different languages, whether modern or ancient, where the potential for cross-linguistic sharing of techniques and methods can be discussed.
The Workshop on Resources and Tools for Derivational Morphology aims at covering a wide range of topics.
In particular, the topics to be addressed in the workshop include (but are not limited to) the following:
– resources for derivational morphology
– connecting the derivational morphology level of annotation in language resources with other levels of linguistic analysis (e.g. semantic, syntactic)
– (NLP) tools for the semi-automatic creation of resources for derivational morphology
– (NLP) tools including components of derivational morphology
– empirically based comparative and multilingual studies on derivational morphology
– empirically based diachronic studies on derivational morphology
– query tools for derivational morphology resources
– theoretical issues in derivational morphology.
INSTRUCTIONS FOR SUBMISSION
We invite the submission of long abstracts describing original, unpublished research related to the topics of the workshop. Abstracts should not exceed 6 pages (references included). The language of the workshop is English, and all abstracts must be submitted in well-checked English.
Abstracts should be submitted in PDF format only, via the EasyChair page of the workshop at https://easychair.org/conferences/?conf=derimo2017. Please register with EasyChair first if you do not have an account.
The style guidelines to follow for the paper can be found here: http://derimo2017.marginalia.it/index.php/CfP/authors-kit.
Please note that, as reviewing will be double-blind, the abstract should not include the authors’ names and affiliations or any references to websites, project names etc. that would reveal the authors’ identity. Furthermore, any self-reference should be avoided. For instance, instead of “We previously showed (Brown, 2001)…”, use citations such as “Brown previously showed (Brown, 2001)…”. Each submitted abstract will be reviewed by three members of the programme committee.
The authors of the accepted abstracts will be required to submit the full version of their paper, which may be extended up to 12 pages (references included).
The oral presentations at the workshop will be 30 minutes long (25 minutes for presentation and 5 minutes for questions and discussion).
10. Deadline: 30 June, 2017
The Department of English and Theatre at Acadia University seeks applications for a 9.5-month contractually limited position at the rank of Assistant Professor in Postcolonial Literature to begin 1 August 2017. The successful candidate will teach an upper-level course in the Literature of Australia and New Zealand as well as courses in introductory English, and will be expected to contribute to the intellectual life and service work of the department. More information available here.
11. Deadline: July 9, 2017
Three (3) new postdoctoral researcher positions are now open at HELDIG — Helsinki Centre for Digital Humanities (http://heldig.fi), at the University of Helsinki, Finland:
Further information about the positions can be obtained from Professor Mikko Tolonen, mikko.tolonen at helsinki.fi
12. Deadline: July 15, 2017
Call for Papers: Learning by the Book: Manuals and Handbooks in the History of Knowledge, Conference, Princeton University, 7-10 June 2018
Organized by Angela Creager (Princeton University), Mathias Grote (Humboldt University Berlin), Elaine Leong (MPI for the History of Science, Berlin)
Supported by German Historical Institute, Washington, D.C. and Princeton University (the Center for Collaborative History, the International Fund, and the David A. Gardner ’69 Magic Project in the Humanities Council)
Often overlooked, handbooks, protocols, and manuals are key tools in the making, preserving, and sharing of knowledge. Across editorial offices, artisanal workshops, religious schools, culinary institutes, and biomedical laboratories, instructional and reference texts codify the knowledge of a working community, with an eye to communicating what a new practitioner needs to know. This conference will address how handbooks, protocols, manuals, catalogues and related instructional or reference media have contributed to the standardization, codification, transmission, and revision of knowledge in diverse fields. How are practices and protocols written down, distributed or preserved, and how are objects or processes named, registered or classified? What kind of credit accompanies the development or compilation of methods or reference literature? When and why do certain books become commercially successful or canonical, and others obsolete? How does their circulation relate to the commodification of required materials, or to more informal forms of exchange? Possible fields and sites of scrutiny will range from the alchemical workshop to the 20th century laboratory, or from the maintenance of technologies to medical diagnosis, language acquisition, government regulation, natural history writing or museum inventories, but is by no means restricted to these examples.
We invite proposals from the history of science and knowledge broadly construed as well as from science and technology studies, the history of arts and crafts, the history of the book and media or related fields. Contributions will cover a wide geographical and temporal range, from antiquity to the 20th century, in order to sound out, put simply, how knowledge relates to texts, and writing, reading and learning to doing. To broaden the scope of an existing core group of scholars, we are particularly interested in case studies from humanities, technologies, applied sciences or manufacture and industry, as well as in those with a scope reaching beyond North America and Europe. Titles and abstracts of max. 400 words should be sent to firstname.lastname@example.org and email@example.com by July 15th, 2017. We expect to be able to cover transportation and accommodation costs of conference participants.
13. Deadline: August 14, 2017
Lecturer/Asst Professor (tenure-track/tenured)
The Dalhousie School of Information Management (SIM) seeks a dynamic and innovative colleague to join our team. SIM invites applications for a probationary tenure-track, tenure-track or tenured position at the rank of Lecturer or Assistant Professor, commencing January 1, 2018 (negotiable).
This position combines teaching, research, and administrative responsibilities. The School seeks candidates with a strong interest in, and capacity for, interdisciplinary research. Candidates will be expected to teach in at least two programs at the graduate or undergraduate levels. Professional information management experience will be an asset.
The successful candidate will have a PhD (or ABD status) in information management or a related discipline, with research expertise and/or teaching experience in one or more of the following areas:
* Data management: analytics and visualization, curation,
* Information risk management
* New and emerging media
* Other relevant areas including digital transformation,
organizational learning, collaboration, and knowledge management
The SIM (http://sim.management.dal.ca) offers two graduate programs: the American Library Association-accredited Master of Library and Information Studies (MLIS) program, and the mid-career blended learning Master of Information Management (MIM) program. At the undergraduate level, the School provides core and elective courses in the Bachelor of Management program, delivered collaboratively with the three other schools in the Faculty of Management. The School also participates in Dalhousie’s Interdisciplinary PhD program.
The SIM is part of the interdisciplinary Faculty of Management (http://www.dal.ca/faculty/management.html), which also includes the School of Public Administration, the School for Resource and Environmental Studies, and the Rowe School of Business. The Faculty of Management’s mission is to collaboratively advance management knowledge and practice, and its vision is inspiring managerial solutions to transform lives. We seek an additional colleague who will contribute to, and thrive in, this environment.
Dalhousie University (http://www.dal.ca/) is one of Canada’s leading teaching and research universities, with four professional faculties; a Faculty of Graduate Studies; and a diverse complement of graduate programs. Inter-faculty collaborative and interactive research is encouraged, as is cooperation in teaching. Dalhousie University inspires students, faculty, staff and alumni to make significant contributions regionally, nationally, and to the world.
Dalhousie University is located in Halifax, Nova Scotia, Canada. Halifax is a vibrant capital city and is the business, academic, and medical centre for Canada’s east coast.
All qualified candidates are encouraged to apply; however, Canadians and permanent residents will be given priority. Dalhousie University is committed to fostering a collegial culture grounded in diversity and inclusiveness. The university encourages applications from Aboriginal people, persons with a disability, racially visible persons, women, persons of minority sexual orientations and gender identities, and all candidates who would contribute to the diversity of our community.
Applicants should submit a cover letter, curriculum vitae, copies of past teaching evaluations, and statements of teaching philosophy and of research interests. (Each statement should be approximately one page.) Applications must also include a completed Self-Identification Questionnaire, which is available at www.dal.ca/becounted/selfid. Applications should be directed
Ms. Kim Humes
School of Information Management
Kenneth C. Rowe Management Building
6100 University Avenue, Suite 4010
PO BOX 15000
Halifax, NS B3H 4R2
Fax: 902-494-2451 <(902)%20494-2451>
Voice: 902-494-3656 <(902)%20494-3656>
Electronic applications are preferred.
14. Deadline: August 17, 2017
We are happy to announce ISCOL 2017, the Annual Meeting of the Israeli Seminar on Computational Linguistics. ISCOL 2017 will be held on Monday, September 25 in the Computer Science Department of the Hebrew University of Jerusalem, at the Edmond Safra Campus in Givat Ram.
Computational Linguistics and Natural Language Processing are active research and development fields in Israel today, both in academia and industry. ISCOL is a venue for exchanging ideas, reporting on work in progress and established results, forming cooperation, and advancing the collaboration between academia and industry. ISCOL is also a friendly stage for students for their first appearance in this community.
We invite presentations on recent work in all areas of computational linguistics, natural language processing and closely related fields. We accept work underway, provided that it represents recent and original work of interest to our audience.
Please submit your extended abstracts (up to 2 pages, including references) through EasyChair here:
More information can be found here:
15. Deadline: October 8, 2017
Workshop on Corpus-based Research in the Humanities (CRH), with a special focus on space and time annotations
** Vienna (Austria) January 25-26, 2018 **
The Workshop on “Corpus-based Research in the Humanities” (CRH) brings together those areas of Computational Linguistics and the Humanities that share an interest in building, managing and analysing text corpora. This year’s edition has a specific focus on time and space annotation in textual data, backed by a keynote speaker with a special interest in this aspect of corpus management.
The second edition of CRH will be held in Vienna (Austria) on January 25th-26th 2018 and will be hosted by the Austrian Academy of Sciences, the University of Vienna and the Technische Universität Wien.
The series of the CRH workshops continues that of the workshop on “Annotation of Corpora for Research in the Humanities” (ACRH), the three editions of which were held respectively in 2011 (Heidelberg, Germany), 2012 (Lisbon, Portugal) and 2013 (Sofia, Bulgaria). The first CRH was held in Warsaw (Poland) in 2015.
Submissions of long abstracts for oral presentations and posters (with or without demonstrations) featuring high quality and previously unpublished research are invited on the following TOPICS:
– specific issues related to the annotation of corpora for research in the Humanities (annotation schemes and principles), with special interest in space and time annotations
– corpora as a basis for research in the Humanities
– diachronic, historical and literary corpora
– use of corpora for stylometrics and authorship attribution
– philological issues, like different readings, textual variants, apparatus, non-standard orthography and spelling variation
– adaptation of NLP tools for older language varieties
– integration of corpora for the Humanities into language resources infrastructures
– tools for building and accessing corpora for the Humanities
– examples of fruitful collaboration between Computational Linguistics and Humanities in building and exploiting corpora
– theoretical aspects of the use of empirical evidence provided by corpora in the Humanities
This year, CRH will have a SPECIAL TOPIC concerning time and space annotation in textual data. Submissions with this focus are especially encouraged.
Contributions reporting results from completed as well as ongoing research are welcome. They will be evaluated on novelty of approach and methods, whether descriptive, theoretical, formal or computational.
The proceedings will be published in time for the workshop. They will be co-edited by Andrew Frank, Christine Ivanovic, Francesco Mambrini, Marco Passarotti and Caroline Sporleder.
MOTIVATION AND AIMS
Research in the Humanities is predominantly text-based. For centuries scholars have studied documents such as historical manuscripts, literary works, legal contracts, diaries of important personalities, old tax records etc. Large amounts of such documents exist and are increasingly available in digital form. This has a potentially profound impact on how research is conducted in the Humanities.
Digitised sources allow scholars to analyse texts more quickly and more systematically.
Digital data can also be (semi-)automatically mined: important facts and interdependencies can be detected, complex statistics can be calculated. Analysis of locations and time in documents is often crucial to understand and visualize trends. Results can be visualised and presented to the scholars, who can then delve further into the data for verification and deeper analysis.
Digitisation encourages empirical research, opening the road for completely new research paradigms that exploit `big data’ for humanities research. Digitisation is only a first step, however. In their raw form, electronic corpora are of limited use to humanities researchers. Corpus annotation can build on a long tradition in (corpus) linguistics and computational linguistics but the true potential of such resources is only unlocked if corpora are enriched with different layers of linguistic annotation (ranging from morphology to semantics, including location and time).
The CRH workshop aims at building a tighter collaboration between people working in various areas of the Humanities (such as literature, philology, history, translation studies etc.) and the research community involved in developing, using and making accessible different kinds of corpora. A gap exists between computational linguists (who sometimes do not involve humanists in developing and exploiting corpora for the Humanities) and humanists (who sometimes just aren’t aware that such corpora exist and that automatic methods and standards to build and use them are today available). Over the past few years a number of historical annotated corpora have been started, among which are treebanks for Middle, Early Modern and Old English, Early New High German, Medieval Portuguese, Ugaritic, Latin, Ancient Greek and several translations of the New Testament into Indo-European languages. The experience of this ever-growing set of projects can provide many suggestions on the methodology as well as on the practice of interaction between literary studies, philology and corpus linguistics.
INSTRUCTIONS FOR SUBMISSION
We invite the submission, in PDF format, of long abstracts describing original, unpublished research related to the topics of the workshop. Abstracts should not exceed 6 pages (references included) and must be written in English.
Submissions have to be made via the EasyChair page of the workshop at https://easychair.org/conferences/?conf=crh2 (requires prior registration with EasyChair).
The style guidelines can be found here: http://www.oeaw.ac.at/forschung-institute/biblio/academiae-corpora/ac/crh2/authors-kit/.
Reviewing will be double-blind; therefore, the abstract should not include the authors’ names and affiliations or any references to websites, project names etc. that would reveal the authors’ identity. Furthermore, any self-reference should be avoided. For instance, instead of “We previously showed (Brown, 2001)…”, use citations such as “Brown previously showed (Brown, 2001)…”. Each submitted abstract will be reviewed by three members of the program committee.
Submitted abstracts can be for oral or poster presentations (possibly with demo). There is no difference between the different kinds of presentation both in terms of reviewing process and publication in the proceedings (the limit of 6 pages holds for both abstracts intended for oral and poster presentations).
The authors of the accepted abstracts will be required to submit the full version of their paper, which may be extended up to 10 pages (references included).
The oral presentations at the workshop will be 30 minutes long (25 minutes for presentation and 5 minutes for questions and discussion).
Depending on the number of submissions, a poster session might be organised as well.
16. Deadline: ASAP
17. Deadline: ASAP
Digital Humanities Developer – Center for Digital Humanities – Princeton University Library
The DH Developer will promote the work of CDH through workshops and other outreach activities including attending national and international conferences on Digital Humanities and relevant technologies.
This position qualifies for 20% R&D time on a project chosen in consultation with the Lead Developer.
- Build, test, debug, and document software designed to support research in the digital humanities. Estimate effort on software projects. Serve as technical lead on CDH projects as appropriate to skills and expertise.
- Hold consultations with members of Princeton community to scope work and suggest technologies for non-CDH project work.
- Teach workshops, write blog posts, and promote the work of CDH to Princeton campus and larger DH communities.
- Work on research and development projects related to pushing the boundaries of digital humanities development. Projects to be chosen in consultation with CDH Lead Developer.
- Knowledge of frontend testing frameworks
- Experience with version control
- Ability to write clear documentation
- Bachelor’s Degree from a 4-year college or university
- Knowledge of template frameworks and styling tools (such as SASS/Bootstrap/Bourbon)
- Familiarity with Python or another high-level scripting language
- Familiarity with web frameworks such as Django or Ruby on Rails
- Experience with RESTful APIs and various data stores and tools such as: relational databases, XML databases, graph databases; Solr or elasticsearch; RDF and XML
- Experience working on and contributing to open source software projects
- Familiarity with humanities research
Princeton University is an Equal Opportunity/Affirmative Action Employer and all qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity or expression, national origin, disability status, protected veteran status, or any other characteristic protected by law. EEO IS THE LAW
18. Deadline: ASAP
Summer Assistants-Shakespeare Website
To apply for these student summer project positions, please submit a current resume or CV (curriculum vitae) to Professor Lara Bovilsky via email, at the address provided below. Review begins immediately, but the positions will remain open until filled.
Institution: University of Oregon
Supervisor: Associate Professor Lara Bovilsky (firstname.lastname@example.org)
Prof. Bovilsky is looking for 2-4 assistants to help develop a digital teaching tool. Students will be needed for a period of intensive work of limited duration, from June 17 to June 27.
Students will work as much as 40 hours/week during this period, possibly including some weekend times.
Two types of assistant are needed:
(1) Website Development Assistant
Students with DH background or interest: Students with some background in DH and/or web design will help build, expand, and potentially migrate the website.
While anyone with strong interest and some experience in DH may apply, preference will be given to applicants with familiarity with, or an interest in, web design and Omeka’s features, including, for example, Bulk Metadata Editor, Collection Tree, Digital Object Linker, Exhibit Builder, and Neatline. The ability to pursue self-directed study of Omeka tutorials is required, as the Website Development Assistant will be learning about and employing new plugins. The Website Development Assistant will find and take online tutorials, and share that knowledge with Professor Bovilsky and any other collaborators working on the teaching tool. Successful applicants will be detail-oriented and able to troubleshoot inevitable technical challenges with grace. The Website Development Assistant may be asked to maintain timely and organized correspondence with libraries and archives as rare books are scanned and their images supplied with metadata and incorporated into the teaching tool.
This project-based student worker position will pay $14/hour.
(2) Website Content and Copyright Assistant
An additional student or students are needed to help undertake correspondence with copyright holders of materials of interest to obtain publication permissions for the website. A Copyright Assistant will be asked to maintain timely and organized correspondence with libraries, archives, presses, individuals, and other copyright holders to help obtain both images of rare books and other materials and permission to publish those materials on the website.
Preference will be given to applicants with some familiarity with Shakespeare and his reception AND/OR to applicants with experience related to copyright permissions.
This project-based student worker position will pay $12/hour.
This project is sponsored by a grant from the Folger Shakespeare Library. The website helps undergraduate and high school students learn about the history of Shakespeare after his death, emphasizing the many historical and literary changes that led to waves of rewriting of Shakespearean texts and that produced radically different assessments and uses of Shakespeare over the centuries. It reflects the diverse global understandings and power structures that have shaped the uses of Shakespeare and his texts. The tool was piloted in multiple classrooms at three universities during the 2016-17 academic year and is now being expanded and further developed.
Email Prof. Bovilsky with any questions: email@example.com
By Rachel Rochester and Heidi Kaufman
This week we continue our exploration of the DiRT Directory, but first a quick announcement.
Please join us and University of Oregon Libraries for the last event of the school year, on Wednesday, May 31, from 3-5 p.m., in Knight Library 117 for “OpenRefine for Digital Humanists” with Digital Collections Metadata Librarian Sarah Seymore. Do you have unruly data that you need to clean up? Are you looking for tools to assist in extracting data from other sources for your research? Or do you need to convert your data into another format? Come to the next workshop on OpenRefine to find out answers to these mysteries! OpenRefine is a tool gaining popularity among digital humanists, designed for those who have a data set but need help curating, storing, and using that data in productive ways. This workshop will introduce users to OpenRefine’s interface, basic editing and data transformation, and some methods for data enhancement. Refreshments will be served. Please RSVP here: https://goo.gl/forms/eojFamQOpnPq2haw1.
A note from the presenter:
1. Bring your own laptop
2. Download OpenRefine here beforehand: http://openrefine.org/download.html
3. Register for a Google Maps API key here beforehand: https://developers.google.com/maps/faq#new-key
4. Be prepared to follow along!
Also, more information here: http://researchguides.uoregon.edu/c.php?g=595177&p=4516327
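For a taste of the kind of data transformation the workshop covers: one of OpenRefine's best-known features is clustering near-duplicate values via "key collision." The sketch below is a rough Python approximation of its "fingerprint" keying method (trim, lowercase, strip punctuation, sort and dedupe tokens), not OpenRefine's actual code, but it illustrates why messy variants of the same name end up grouped together.

```python
import string
from collections import defaultdict

def fingerprint(value):
    """Approximate OpenRefine's 'fingerprint' keying method:
    trim, lowercase, strip punctuation, then sort and dedupe tokens."""
    value = value.strip().lower()
    value = value.translate(str.maketrans("", "", string.punctuation))
    tokens = sorted(set(value.split()))
    return " ".join(tokens)

def cluster(values):
    """Group messy variants that collide on the same fingerprint key."""
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    # Only groups with more than one member are interesting clusters.
    return [vs for vs in groups.values() if len(vs) > 1]

# Toy data: three spellings of the same institution, plus an unrelated value.
names = ["University of Oregon", "university of oregon ",
         "Oregon, University of", "Knight Library"]
print(cluster(names))
# The three "University of Oregon" variants share one fingerprint.
```

In OpenRefine itself this happens through the Cluster & Edit dialog rather than code, but the underlying idea is the same: normalize each value to a key, then merge values whose keys collide.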
If you missed last week’s blog post on the DiRT Directory, please check it out here! A story’s never as much fun without a beginning, after all.
As I mentioned last week, I’m working on a project that charts the evolution of environmental concern related to industrialization and development in South Asia. And my goal is to find a way to use one of the tools on the DiRT directory to help me develop this project. I’m going to begin my project with some distant reading, and I’d like to know if authors invoke certain keywords related to climate change and environmental degradation with greater frequency in more recent novels.
On the home page for the DiRT Directory, I’m asked to identify what I need my digital tool to do. For me, this was one of the most challenging steps in this process. As someone who works with words, I could think of many ways to describe my project: I wanted to “Analyze Data” from novels… but I didn’t necessarily want my tool to do the work for me. I wanted to “Find Information” related to literary trends, too. But I ultimately decided that the closest action was to “Capture Information” from a body of writing.
Once I clicked on “Capture,” I was faced with yet more search options. I could select my platform, cost preferences, choose to exclude certain tools, such as software that was in Beta testing or no longer supported, pick which type of license I preferred, pick my research object, and sort and order the results to my preference. Initially, I left nearly all of the search terms blank, hoping that I could come up with a wealth of tools. Platform was easy – I use a Mac computer. My research object was “literature.” But when I entered those search options, I was only offered a single program – Calibre – which is an ebook library management application that didn’t sound like it would help my project much at all. I was disappointed. So I went back, removed my “platform” selection, leaving only my research object criteria, and tried again. This time my search turned up three possibilities, and one, HT-Bookworm, sounded intriguing.
The Directory entry for HT-Bookworm describes it as “a tool that visualizes language usage trends in repositories of digitized texts in a simple and powerful way. It is a tool for culturomic exploration through the observation of chronological trends for words and phrases in large digitized collections of textual documents with metadata facets.” If some of that sounds like Greek to you, and you aren’t a humanist who reads Greek, you aren’t alone. Nevertheless, a tool that could help me look at chronological trends in word usage sounded like a great way to frame my more specific findings regarding the novels and authors I was examining. You can select the dates you’d like Bookworm to search, the metric you’d like it to use, and whether or not the search is case-sensitive; then you just need to input your search term and it instantly generates an attractive line graph. I started exploring.
I reasoned that I should look at words per million from 1750-2015, and I chose to keep the search case-insensitive. I started by searching for terms related to vague environmental anxieties: water, air, trash. Based on my reading, I expected these terms to increase in popularity as anxieties surrounding them increased. The graphs consistently followed my expectations into the 1970s, only to plummet past that point. I decided to try terms that I associate with the environmental movement, and which I imagined would have had little traction before the ‘60s and ‘70s. Industrialization, environment, climate, smog, pollution (the program doesn’t seem to allow two-word searches, frustrating my efforts to search for “climate change” or “air quality,” etc.): all followed similar arcs. As revealed by a quick search of DHCommons, one major limitation of Bookworm is that the current iteration only searches 250,000 out-of-copyright volumes, and although later versions of the program intend to “ingest the entire HathiTrust corpus,” including books under copyright, that’s not an option yet. When I confine my searches to 1750-1923, the period in which most books are out of copyright, the graph reveals a strictly upward trajectory for nearly every term, as anticipated.
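The words-per-million metric itself is straightforward: for each year, count the term's occurrences and normalize by the total number of tokens, so years with more digitized text remain comparable to sparse ones. Here is a minimal sketch over a toy corpus (the year buckets, texts, and naive whitespace tokenization are my own simplifications; Bookworm's actual pipeline over HathiTrust is far more sophisticated).

```python
from collections import Counter

def words_per_million(texts_by_year, term):
    """For each year, return occurrences of `term` per million tokens.
    Normalizing by corpus size makes sparse and dense years comparable."""
    trend = {}
    for year, texts in texts_by_year.items():
        # Naive tokenization: lowercase and split on whitespace.
        tokens = [w.lower() for text in texts for w in text.split()]
        total = len(tokens)
        count = Counter(tokens)[term.lower()]
        trend[year] = 1_000_000 * count / total if total else 0.0
    return trend

# Toy corpus: two "years" of text (a real corpus would hold full volumes).
corpus = {
    1900: ["the smog over the city", "clear air at last"],
    1970: ["smog smog everywhere", "the pollution debate"],
}
print(words_per_million(corpus, "smog"))
```

Even on toy data this shows why normalization matters: 1970 scores higher for "smog" not just because the raw count is larger, but because it is larger relative to the year's total output.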
HT-Bookworm isn’t the perfect tool for my project: I work mostly with contemporary texts, and searching for environmental trends only up to 1923 has limited use for my project. Nevertheless, my experience finding and exploring HT-Bookworm on the DiRT directory helped me realize what I needed to do to find better data, and how I wanted to visualize that data. In next week’s blog post, I’ll reveal my final visualizations, and how a little practice helped me make the most of the directory’s resources.
This week we have seven exciting DH opportunities to share, but first an announcement. Please join us and University of Oregon Libraries on Wednesday, May 31, from 3-5 p.m., in Knight Library 117 for “OpenRefine for Digital Humanists” with Digital Collections Metadata Librarian Sarah Seymore. Do you have unruly data that you need to clean up? Are you looking for tools to assist in extracting data from other sources for your research? Or do you need to convert your data into another format? Come to the next workshop on OpenRefine to find out answers to these mysteries! OpenRefine is a tool gaining popularity among digital humanists, designed for those who have a data set but need help curating, storing, and using that data in productive ways. This workshop will introduce users to OpenRefine’s interface, basic editing and data transformation, and some methods for data enhancement. Refreshments will be served. Please RSVP here: https://goo.gl/forms/eojFamQOpnPq2haw1.
A note from the presenter:
1. Bring your own laptop
2. Download OpenRefine here beforehand: http://openrefine.org/download.html
3. Register for a Google Maps API key here beforehand: https://developers.google.com/maps/faq#new-key
4. Be prepared to follow along!
Also, more information here: http://researchguides.uoregon.edu/c.php?g=595177&p=4516327
1. Deadline: May 26, 2017
CFP: The Digitisation Days aim to present an up-to-date vision of the most recent advances in technology for the digitisation of text, to showcase successful experiences in their application, and to explore the challenges for the near future of digitisation. They comprise the DATeCH International Conference, conceived as a forum to present, discuss, and showcase the latest techniques in digitisation and related fields.
The Digitisation Days and the DATeCH International Conference are supported by the Impact Centre of Competence and organised by the Göttingen State and University Library and the University of Göttingen.
The Digitisation Days will take place in Göttingen at the facilities of the Niedersächsische Staats- und Universitätsbibliothek Göttingen, Historical Building (Papendiek 14, 37073 Göttingen) on 1-2 June 2017.
Latest possible registration: 26th May, 2017 11:59 PM
Please visit http://ddays.digitisation.eu/registration/
2. Deadline: June 11, 2017
3. Deadline: June 30, 2017
CFP: Knowledge Resources for the Socio-Economic Sciences and Humanities
September 7, 2017 – Varna, Bulgaria
The Knowledge Resources for the Socio-Economic Sciences and Humanities workshop will be held in conjunction with the 11th biennial Recent Advances in Natural Language Processing conference (RANLP 2017) which will take place in September 4-8, 2017, in Varna, Bulgaria.
The KnowRSH workshop aims to provide a forum for researchers working on the integration and creation of knowledge resources for Socio-Economic Sciences and Humanities applications. In particular, KnowRSH aims at bringing together NLP researchers with historians, political scientists, philosophers, and researchers from infrastructure communities, such as CLARIN and DARIAH, ISKO and COST ENeL.
The workshop is endorsed by the ACL Special Interest Group on Language Technologies for the Socio-Economic Sciences and Humanities (SIGHUM), the DARIAH-EU Working Group for Lexical Resources, as well as COST ENeL.
Big cultural heritage data present an unprecedented opportunity for the humanities, one that is reshaping conventional research methods. However, the digital humanities have grown past the stage where the mere availability of digital data was enough as a demonstrator of possibilities. Knowledge resource modeling, development, enrichment, and integration are crucial for associating relevant information in pools of digital material that are not only scattered across various archives, libraries, and collections but also often lack relevant metadata. Within this research framework, NLP approaches originally stemming from lexico-semantic information extraction and from knowledge resource representation, modeling, development, and reuse have a pivotal role to play.
From the NLP perspective, applications of knowledge resources for the Socio-Economic Sciences and Humanities present numerous interesting research challenges. These relate, among other things, to developing historical lexico-semantic sources and annotated corpora, addressing ambiguity and variation in historical sources, developing knowledge resources for NLP tool adaptation, and using NLP techniques for the semantic interlinking, mapping, and integration of existing knowledge resources. Moreover, a recently renewed interest in linguistic linked data approaches to language resources presents both a challenge and an opportunity for NLP researchers working in the Socio-Economic Sciences and Humanities domains: linking cultural heritage and humanities data sources to linguistic linked data information.
More information available here.
4. Deadline: July 1, 2017
CFP International Conference: CONTOURS OF THE FUTURE: TECHNOLOGY AND INNOVATION IN CULTURAL CONTEXT
1 – 3 November 2017, Saint-Petersburg, Russia
For postindustrial societies the future has turned into a space of risk and the construction of expectations. The future exists in the present as discourse and rhetoric, as a competition of visions and agendas that shape the potential of future innovations. Scenarios of the future are transformative, since they direct scientific practices, influence political and economic decisions, and focus stakeholders’ interests.
The conference will highlight processes of knowledge production about technologies of the future as a central sociocultural aspect of technological development. The participants are invited to consider the concept of sociotechnical imaginaries (Sheila Jasanoff) as a set of cultural practices applied by communities in order to construct shared meanings of desired futures – and to reflect on the role of technology in them. These practices involve not only experts such as scientists and engineers, but also politicians, public intellectuals, writers, journalists, artists. Success of innovations and the design of particular technologies depend on the cultural legacy shared by these people, as well as detailed consideration of social, legal, ethical and aesthetic aspects. This perspective emphasizes the ways through which technologies and societies are co-constructed, and how cultural meanings and power relations are embedded in science and technology.
To discuss these questions we invite theorists and practitioners whose work touches upon sociocultural aspects of technological change, including the fields of media and arts, foresight and policy, philosophy and cultural studies, history and sociology, linguistics and communication.
The conference welcomes papers that address the following topics:
* Philosophy of science and technology: The future as an epistemological problem, philosophy of technological utopias
* Methods of future studies: STS (Science and Technology Studies), sociology of expectations, sociotechnical imaginaries; forms of “working with the future” through foresight, strategic planning, scenarios analysis, role playing
* The language of technical change and futurology, history of concepts, descriptions of the unknown
* Sociology of innovation: The politics of the production of novelty and relevance; social and psychological aspects of information and communication technologies as sites for imagined interactions
* Ethical aspects of emerging technologies: Bioethics, roboethics, information ethics
* Aesthetic dimensions of technological change: Science art, news media; representations of science and technology in literature and art, visual images of the future
* Cultural history of technology: Technology and national identity; technologies as media of cultural transfer; sociocultural aspects of users’ interaction with technology; appropriation and domestication of novel technologies
* Digital Humanities: Making the human past fit for future generations
* Archives of the future: Historical experience of forecasting and designing the future, museum exhibits, industrial heritage, industrial archaeology, buried and forgotten futures
Publication: Conference materials (short papers and extended abstracts) will be published prior to the conference.
Working languages of the conference are English and Russian.
Participation in the conference is free of charge.
Organizer: Peter the Great Saint-Petersburg Polytechnic University http://english.spbstu.ru/university/
Co-organizer: Society for the History of Technology http://www.historyoftechnology.org/
* Send an application to firstname.lastname@example.org containing the title of the paper, your personal data (name, surname, institutional affiliation, telephone, and e-mail), and a short abstract of up to 150 words by 1 July 2017. Participants will be notified of the status of their application by 15 July 2017.
* Full text (10000 to 20000 printed characters) should be sent before 10 September 2017, tables and illustrations may be attached if needed.
5. Deadline: July 8, 2017
FORCE 11 SCHOLARLY COMMUNICATIONS INSTITUTE
What are your goals for the summer?
Do you want your research to reach more people but don’t know how?
Perhaps you need help telling the story of yourself as a scholar and curating your online presence.
Or maybe you need a primer on new forms of publication, how they are assessed, and their role in promotion and tenure?
Or are you curious about open peer review, open annotation, and open access, and their benefits for students and faculty alike?
If so, consider joining us at the FORCE 11 SCHOLARLY COMMUNICATIONS INSTITUTE in seaside La Jolla, CA.
July 31 – August 4, 2017.
University of California, San Diego.
ABOUT THE PROGRAM:
Join colleagues, librarians, and students for an intensive five days of scholarly communication summer camp. Learn about the changing world of academic publishing, the importance of a professional digital presence for yourself and your work, and the advantages of creating publicly accessible scholarship. Take advantage of the opportunities to network with colleagues from your own and other disciplines, learn from librarians and expert faculty in the field, and enjoy inclusive programming and community conversations all week long.
WHY SHOULD I ATTEND?
— To Understand the Changing World of Publishing: From open access to altmetrics, you’ll be thoroughly immersed in new modes and forms of academic publishing and communication.
— To Pass It On: Bring back your newfound knowledge to your peers and students, and help them be better prepared for the job market, whatever their chosen career.
— To Gain a Support Network: Take advantage of brainstorming sessions, plenaries, and downtime to interact with faculty practitioners in the field of scholarly communications and discuss ideas with colleagues who share your interests and concerns.
EARLY-BIRD DEADLINE APPROACHING (save 25%): July 8, 2017
FOR COMPLETE INFORMATION AND TO REGISTER: https://www.force11.org/fsci
6. Deadline: July 16, 2017
CFP: DADH 2017 – The 8th International Conference of Digital Archives and Digital Humanities
Conference Theme: Digital Humanities Evolving: Past, Present, and Future
Venue: National Chengchi University, Taipei
Dates: November 29 – December 1, 2017
7. Deadline: July 20, 2017
The Division of Preservation and Access of the National Endowment for the Humanities will be accepting applications for grants in its Humanities Collections and Reference Resources (HCRR) program, with a deadline of July 20, 2017. With maximum award amounts ranging from $50,000 (planning) to $350,000 (implementation), these grants support projects to preserve and create intellectual access to such collections as books, journals, manuscript and archival materials, maps, still and moving images, sound recordings, art, and objects of material culture. Awards also support the creation of reference works, online resources, and research tools of major importance to the humanities. Eligible activities are wide-ranging, often involving the use of digital methods.
HCRR includes a new opportunity in 2017 to encourage collaboration between smaller and larger institutions. This Partnership/Mentorship Opportunity provides up to $60,000 for planning and pilot-level projects that could help to propel lasting collaborative relationships. These awards might be especially well suited for community-based cultural heritage initiatives but are not limited in geographic or topical scope.
Further details, including links to the application guidelines and other resources, are available via the following Web article: https://www.neh.gov/divisions/preservation/featured-project/new-guidelines-humanities-collections-and-reference-resources-program-now-available
By Rachel Rochester and Heidi Kaufman
This week we look into an especially rich resource, the DiRT Directory. But first, a few announcements.
Unfortunately, we have to announce a cancellation. Our “Mapping Tools 101 for Humanists” workshop, scheduled for Wed. May 24, has been canceled. If you’re interested in learning more about mapping tools, please reach out and let us know for next year.
We’re also pleased to announce an addition to the spring calendar. Please join us and University of Oregon Libraries on Wednesday, May 31, from 3-5 p.m., in Knight Library 117 for “OpenRefine for Digital Humanists” with Digital Collections Metadata Librarian Sarah Seymore. Do you have unruly data that you need to clean up? Are you looking for tools to assist in extracting data from other sources for your research? Or do you need to convert your data into another format? Come to the next workshop on OpenRefine to find out answers to these mysteries! OpenRefine is a tool gaining popularity among digital humanists, designed for those who have a data set but need help curating, storing, and using that data in productive ways. This workshop will introduce users to OpenRefine’s interface, basic editing and data transformation, and some methods for data enhancement. Refreshments will be served. Please RSVP here: https://goo.gl/forms/eojFamQOpnPq2haw1.
A note from the presenter:
1. Bring your own laptop
2. Download OpenRefine here beforehand: http://openrefine.org/download.html
3. Register for a Google Maps API key here beforehand: https://developers.google.com/maps/faq#new-key
4. Be prepared to follow along!
Also, more information here: http://researchguides.uoregon.edu/c.php?g=595177&p=4516327
When it comes to developing a digital humanities project, sometimes the conception is easy, but the execution is hard. I can think of myriad ways digital tools might help me research, develop, and prove my theses, but I don’t always know how to find the resources to make my vision a reality. That’s where the DiRT Directory comes into play. According to the website, DiRT “aggregates information about digital research tools for scholarly use,” and it makes finding the right digital tool easy even for those of us who aren’t tech savvy. DiRT makes it possible for everyone to be a DIY digital humanist: these tools will make you stronger—and you *should* try them at home!
DiRT showcases tools for almost any DH project you might imagine, and it makes them approachable by pairing each with a jargon-light introduction. I find that many digital tools are inscrutable to humanists just beginning to dabble in DH, at least upon first introduction, because they assume a strong base of technical knowledge. Luckily for me (and you, too!), the DiRT Directory allows users to search for tools based on research problem or data type, and then offers brief summaries of the capabilities of each tool. It’s also possible to narrow your search by inputting your preferred platform (Windows, Mac, etc.), price point, or license (e.g., open source). DiRT is remarkable in part because its easy-to-use search criteria take the guesswork out of exploring DH tools.
To prove how easy DiRT Directory is for beginners, I’m going to spend the next two blog posts showcasing my own journey with a new tool. I’m working on a project that charts the evolution of environmental concern related to industrialization and development in South Asia. For the most part, I’m proving my point by close reading several South Asian novels, but I’d like to frame my close readings within a well-informed overview. I want to know if authors invoke certain keywords related to climate change and environmental degradation with greater frequency in more recent novels. Is there a digital tool that can help me answer this question?
My first challenge is figuring out which tool to select from the long list available in the directory. From there I have to figure out how to use the tool. And finally, I’ll need to learn how to read the results of my experiments. Each week you’ll read the next installment of my efforts. My goal is to prove to you how easy it is to learn how to find and use new DH tools with the directory’s help.
When I first learned about DiRT, I was awe-struck by the wealth of possibilities housed there. There were so many tools I had never heard of, each one with a succinct description of why it could be helpful to an eager and ambitious humanist. Immediately, my mind began whirring with a dozen projects to pursue.
As I learned more about the directory, I noticed more subtle features that caught my attention, too. In our recent podcast, DH scholar Martha Nell Smith noted that “diversity is not a luxury” in DH, but it is too often segregated into separate streams at conferences or not acknowledged at all. In contrast, DiRT can be accessed in both English and Spanish – an important first step toward including DH scholars from around the world. DiRT is also seeking volunteers to lead translation teams in other languages. And DiRT is clearly a project that inspires volunteer participation: it is made possible by a global coterie of volunteer professors, students, and librarians who contribute content regularly.
In the next installment I’ll cover my tool selection process and my preliminary experiences using it! By showcasing my results I hope to inspire others to tinker and experiment with the wealth of opportunities available on the DiRT Directory.
By Rachel Rochester and Heidi Kaufman
Please join us on Friday, May 5, from 3-5 p.m. in Knight Library 117 for a Podcasting Workshop! Podcasts are a growing genre, and a versatile academic tool. Your humble author, Rachel Rochester, will be joined by Claire Graman (Eng.), and Dr. David Chamberlain (Classics) for a workshop on Podcasting basics and discussion of the podcast form in academia. Refreshments will be served and we hope to see you there! Please RSVP here.
Every year on the Day of DH, scholars interested in the digital humanities co-create an international digital event, detailing their projects and what DH means to them. This year’s Day of DH, which took place on April 20, was organized by centerNet, an international network of digital humanities centers, with the hosting and technical support of LINHD, the Digital Innovation Lab @ UNED, headed by Elena González-Blanco (UNED), and the collaboration of Humanidades Digitales CAICYT, headed by Gimena del Rio Riande (CONICET, Argentina). This year’s Day of DH saw 123 registered users, nine groups, 92 blogs, eight discussion forums, and a riot of activity on Twitter.
DH is a field that has no firm definition, which is part of what makes the Day of DH so important. By showcasing a wealth of “typical days” for DH scholars around the world, participants can see the breadth of projects that fit under the DH umbrella.
As I read through various blogs on the official Day of DH page, I realized not only that a vast array of projects can be categorized as DH, but that DH scholars have impressively diverse days. Most of the posts read like to-do lists that range from wildly disparate to thrilling to utterly banal, but they all coalesce into a coherent representation of the scope of digital humanities work. Jessica Dussault, a programmer at the Center for Digital Research in the Humanities (CDRH) at the University of Nebraska-Lincoln (UNL), wrote about starting her morning spreading the word about the redesigned Journals of the Lewis and Clark Expedition website, moving on to researching the UNL marching band’s history and preparing student presentations on the subject for publication, shifting to building an API, and finally thinking about the preservation of endangered data. Georg Vogeler wrote about presenting on state-of-the-art digital editing at a conference. Several scholars and researchers from King’s Digital Lab at King’s College London contributed blog posts about their various research projects. Neil Jakeman wrote about the pleasure in showing project partners the benefits of relational databases. Tiffany Ong wrote about unboxing a new desktop computer screen. Arianna Ciula and Geoffroy Noel wrote about collaborating with the Department of Informatics to develop a way to process multilingual texts in translation. Some of the posts on the site are elaborate and philosophical, waxing poetic about the future of the field, while others are simple snapshots of the minutes that tick by in a digital humanist’s day. All of it leads us ever closer to an understanding of the field as a whole.
Twitter, as it so often does, captured the essence of the long-form blogs in terser terms. Participants used the hashtag #dayofdh2017 to showcase projects, wish other humanists well, look forward to upcoming events like DHSI, and network.
What does DH mean to you? What does a day in your life as a digital humanist look like? We’d love to hear from you in the comments.
By Heidi Kaufman & Rachel Rochester
We are pleased to announce a new workshop co-sponsored by the library and Digital Humanities. Please join us in Knight Library 117 for an Introduction to R & RStudio this Friday, April 28, from 3-5 p.m. R is an open source programming language used by statisticians, data scientists, digital humanists, and data designers. This session will introduce users to the R language, the RStudio environment, and the basics of operating in R. Attendees can install R and RStudio in advance of the session; instructions on how to do so can be found here:
Refreshments will be served. We hope to see you there.
I’m in the middle of writing my dissertation, and it’s a long, convoluted process involving dead ends, false starts, and conceptual growing pains. In recent years the traditional challenges of writing a dissertation have become newly complicated by the introduction of digital tools that invite audience participation and feedback. While many students writing dissertations participate in writing boot camps or dissertation workshops, the introduction of digital tools means that people can comment or become part of a dissertation community in absentia. And this, of course, raises the thorny issue of feedback in light of the dissertation committee. Can just anyone provide helpful feedback on the dissertation? What if the audience wants me to talk about cute kittens in my chapter on Derrida and my dissertation committee says no? How then might we think about the potential benefits and challenges of public feedback spaces designed or used specifically for the writing of the dissertation? Will this make our jobs easier by providing an audience? Or will it complicate our professional relationships with our advisors? No one wants their thesis to be generic, but it takes daring and audacity to adapt such an institutionally entrenched form. I’m working my way up to it, marshaling my resources, and outlining my last chapters, thinking about the dissertation as a whole and what it contributes to the discourse in postcolonial theory, the environmental humanities, and, of course, the digital humanities.
These kinds of questions led me to Anastasia Salter’s “Hacking the Dissertation,” published in Daniel J. Cohen and Tom Scheinfeldt’s Hacking the Academy: New Approaches to Scholarship and Teaching from Digital Humanities. Salter opens by arguing that she’s come to prefer student work that is “open and collaborative,” and can’t help but feel like the traditional essay is a project’s starting point rather than its goal. If that’s true, she reasons, so too might the dissertation need to be reimagined in the digital age. As it stands, she argues, “It exists, it goes in front of a committee, and mostly it is of vast significance only to the person writing it.” For Salter, more widely spread relevance may come from open-access collaboration in digital forums. The online environments Salter envisions might “become spaces that encourage continual revision, collaboration, and extension” by “offering a community of collaborators—other creators of content who are enthusiastic about sharing their own knowledge and opinions because they are engaged in the same processes for themselves.” And while Salter acknowledges that publishing original work online opens it up to snark, derision, and even vitriol, she suggests that the benefits could well outweigh the risks.
The idea of an ever-present community of scholars willing to offer their expertise is alluring, but it’s also controversial. Theoretically, advisors and committee members are meant to fill that role, offering feedback and guidance gleaned from experience and expertise. A cacophony of online input might threaten to drown out or counter their advice. A dissertation writer new to the field could easily become overwhelmed by competing information. It seems obvious that Salter is not advocating that the digital communities she describes replace institutional systems of support, but it’s worth considering whether online discussion and feedback forums make us stronger writers and scholars. Many writing classrooms, including my own, have used a model of “audience feedback” to help students hone their theses and arguments, but writing instructors carefully monitor and direct those instances of peer input. When my composition students critique each other’s work, I spend a great deal of time offering guidance to be sure both that the feedback is on point and that the students receiving that feedback consider it critically. Dissertation writing groups serve a similar, analog purpose. But in both of those situations, feedback is tethered to real-world relationships and shared discourse communities. Online, such checks and balances are much more difficult to implement.
Arguably, an online discourse community may not be much different from the process of soliciting external reviews on book projects under consideration at presses. Reviewers are expected to offer feedback on a work in progress, just as members of a face-to-face dissertation workshop do. Perhaps what’s different about the dissertation is that a small group of people, the dissertation committee members, have been designated “official” readers and supporters of the project. Without their approval the dissertation won’t pass and the degree won’t be complete. For better or worse, commenters in an online forum do not have the same kind of power.
In grad school, I’ve often operated under the assumption that more feedback is better. When preparing articles for submission or writing samples, I’ve been coached to have as many eyes on the piece as possible. Nevertheless, I find myself wondering whether the open-access model Salter lays out is a populist driver of stronger scholarship or a way to lose the thread of an argument under a barrage of input. Can those comments truly open up smarter, more marketable avenues of inquiry? Or does this digital method threaten to undermine dissertators by introducing more information than they can use given the constraints of the committee structure? How public or private should a dissertation-in-progress be?
Please send us your thoughts!
By Rachel Rochester and Heidi Kaufman
We are looking forward to our first event of the term, a Twitter Workshop, taking place this Friday, April 21, from 3-5 p.m. in Knight Library 117. Veteran Tweeters Rachel Tanner and Courtney Floyd of UO’s English Department will share tips and techniques for using Twitter in scholarship and pedagogy. Light refreshments will be served, and the event is free and open to the public. Space is limited, so if you are planning to attend, please RSVP here. We hope you will join us!
Although many of us working in the humanities have long foregrounded space and place in our scholarship, such projects are beginning to coalesce into the emergent, interdisciplinary field of Geohumanities. The “Space, Place, and the Humanities” National Endowment for the Humanities Summer Institute, to be held at Northeastern University in July and August, describes the field as the “intersection of geography, history, literature, creative arts, and social justice.” A vast number of Geohumanities projects are digital in nature, not just because so many digital tools enable innovative geospatial thinking, but also because DH and the Geohumanities are innately similar in their capacity to inspire interdisciplinary collaboration. In his introduction to the recent special issue of the International Journal of Humanities and Arts Computing dedicated to the digital Geohumanities, Nicholas Bauch writes: “geohumanities practice makes sense in the cradle of digital humanities. They share an ethos of experimentation and expression with new media, focusing on techniques born from design fields” (6). The commonalities between the fields, and the groundswell of excitement that they generate, continue to prove fruitful for creative and visceral scholarship.
Back in February we highlighted StoryMapJS as a particularly accessible and useful tool for adding Geohumanities elements to DH projects. This week, and in preparation for our “Mapping Tools 101 for Humanists” workshop that will take place on Wednesday, May 24, from 3-5 p.m. in Knight Library 117, we wanted to highlight an exciting new project that has just gone live.
Launched on April 11, the Mapping Early American Elections project will present early American election data in 50 maps and visualizations representing Congressional and state elections for 24 states. The visualizations will offer new ways to showcase and examine the New Nation Votes collection of electoral returns. In the team’s introductory announcement, they anticipate that the website they develop “will include a map browser inviting users to explore early American political history in exciting new ways. Issues such as changes in voter participation, turnover in Congress and the state legislatures, the growth of party competition, and regional changes in voting patterns will appear with new clarity.” By translating data into visual resources, researchers will be able to consider early America’s political nascence from a new perspective. The project depends upon vital funding from the Division of Preservation and Access at the National Endowment for the Humanities, as did the New Nation Votes collection originally.
Although the benefits of such a project are clear, mining mappable data, and learning how to generate maps with that data, are a massive undertaking. One of the most exciting aspects of the Mapping Early American Elections project is its mission to teach users not just about the transitional political identity of the U.S., but about the tools and techniques necessary for creating future digital humanities projects with similar raw materials. The team intends to generate tutorials so that users can create their own maps with QGIS. Moreover, they intend to make their data easily accessible to the public.
Are you planning or creating a digital geospatial project of your own? What tools are you using? What tools are you curious about? Or what do you wish there was a tool for, if only someone would help you build it? We’d love to hear from you in the comments.
By Rachel Rochester and Heidi Kaufman
Have you ever considered using Twitter in your scholarship or classes? Please join us on Friday, April 21 from 3-4 p.m. in Knight Library 117 to learn from Twitter veterans Rachel Tanner and Courtney Floyd of the English department. Light refreshments will be served, and the event is free and open to the public. We hope to see you there!
This week, we’d like to congratulate Tara Fickle, Assistant Professor of English, on her newly awarded summer stipend from the National Endowment for the Humanities for her project, “Behind Aiiieeeee!: A New History of Asian American Literature.”
The project will be a digital edition and reconsideration of Aiiieeeee!, a foundational Asian American literary anthology from the 1970s. Dr. Fickle will work with Shawn Wong of the University of Washington, one of the original editors, who has provided Dr. Fickle access to a “treasure trove” of original material related to the anthology. Her plan is to digitize and showcase this material through a series of online interactive learning modules.
Dr. Fickle is one of several scholars at UO to have received support from the NEH for digital humanities work in recent years. The NEH has long been a champion of the digital humanities, and its sustained support offers further evidence of the growing field’s importance. Back in 2006, the NEH recognized the value of DH and created the Digital Humanities Initiative, which has since been renamed the Office of Digital Humanities. Now, eleven years on, the office is a critical source of financial support for scholars in our field. The ODH official materials state that networked digital materials have fundamentally changed the ways we “read, write, learn, communicate, and play,” and the ODH supports projects that both study digital culture and “harness new technology for humanities research.” Along with the Mellon Foundation, the Association for Computers and the Humanities, and the American Council of Learned Societies, the NEH is one of the largest sources of funding for digital humanities scholars and projects.
The NEH has many exciting funding opportunities for DH work with upcoming deadlines. Applications for Digital Humanities Advancement Grants are due on June 6, 2017, for the Humanities Open Book Program are due September 13, 2017, and for Institutes for Advanced Topics in the Digital Humanities are due March 13, 2018.
In the most recent federal budget blueprint, the Trump administration proposes to completely defund the NEH. This would be an immense loss for the Digital Humanities community and the humanities at large. As a federal agency, however, the NEH is unable to lobby for itself. As NEH Chairman William D. Adams explained in his statement on the issue, executive branch agencies like the NEH must answer to the President and the Office of Management and Budget, and must abide by the administration’s budget requests. Because of this, people who value the work and support of the NEH must be particularly vocal in their support. In an e-mail exchange with Scott Jaschik, Rosemary G. Feal, executive director of the Modern Language Association, wrote, “The NEH, along with the NEA, are the only federal agencies dedicated to cultivating and curating literary and cultural research and production. The research budget of the NEH is less than 1 percent of the federal budget for scientific research, yet NEH grants provide catalytic effects that have multiplied throughout communities for 50 years now. I trust that the Congress will continue to value and fund these agencies. The MLA’s members will be vigorously advocating for a robust NEH, which we need now more than ever.”
If you would like to express your support for the NEH, there’s still time to do so. Write and call your representatives and mention, specifically, how much you value the NEH. You can find your senators here, and your representative here.
Congratulations again to Dr. Fickle for her tremendous accomplishment!