2020 Workcycle Sprint 6

Closed Mar 27, 2020 · 0% complete

Google Books:

  • create valid Google Book objects (e.g. rightsMetadata, contentMetadata, descMetadata)
    • testing by Andrew
  • export OCR content
  • complete devops docs
    • Nagios monitoring: disk on GB and SDR API
    • SLA levels
  • update Symphony with current status
  • additional testing of the techMetadata service, then turn it on in production
  • export of objects in a given app state
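The Nagios disk monitoring called for above could be a standard `check_disk` service definition along these lines; the host name, mount point, and thresholds here are illustrative assumptions, not taken from the actual devops config:

```cfg
define service {
  use                  generic-service
  host_name            gbooks-prod        ; hypothetical host name
  service_description  Disk Usage
  ; standard check_disk plugin: warn at 20% free, critical at 10% free
  check_command        check_disk!20%!10%!/data
}
```

A matching command definition would pass the thresholds through to the plugin, e.g. `$USER1$/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$`.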

SDR Evolution:

  • Expand number of cases covered by infra-integration testing
    • ETDs
  • ETD Update Analysis and planning
    • update persistence layer
  • Lyberservices
    • migrate a common lyberservices-scripts use case to pre-assembly
  • use new dor-services-app endpoint for re-accessioning
    • needs testing in pre-assembly
    • needs to be used by Goobi?
  • test admin tags backed by the DB in the staging env
  • Deploy all the things
  • Write more integration tests

Preservation Migration:

  • Complete existing known object remediation
  • Dry-run of the first migration weekend (NOTE: deferred until April 3)

Stretch goals:

SDR Evolution:

  • Investigate autogen of cocina models from OpenAPI
  • SURI (rails service)
    • Plan and prep for migrating off of old Oracle based service
    • Plan and prep for move to new Rails based service
  • Testing the migration to the new data model
    • Run all existing objects through the cocina mapper to see where the gaps are
    • Set up a design meeting (Andrew + all devs, optional) to answer questions:
      • How to run the migration?
      • How to chunk up/sequence the data (by object/content type, by collection, by APO, by druid alpha, by registration date, by tag)?
        • Noting that web archives, ETDs, GIS, and other objects are different than others
      • How can we do this work without taxing current systems?
      • How to find/identify problematic gaps?
        • Missing source IDs for some objects
      • Do we store the mapped data for further analysis?
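The "run all existing objects through the cocina mapper" step above amounts to a read-only dry-run loop that records which objects fail to map. A minimal sketch, where `fetch_druids` and `map_to_cocina` are hypothetical stand-ins for the real batch lookup and mapper entry points:

```ruby
# Hypothetical gap-analysis sketch: map every druid and collect failures
# without writing anything back. The two helpers below are stand-ins for
# whatever the real services expose, with stub data so the loop is runnable.

def fetch_druids
  # stand-in: in practice this might page through the repository index
  %w[druid:bb111cc2222 druid:dd333ee4444]
end

def map_to_cocina(druid)
  # stand-in for the real mapper call; raises on unmappable objects
  raise "missing sourceId" if druid.end_with?("4444")
  { externalIdentifier: druid }
end

def find_mapping_gaps(druids)
  gaps = {}
  druids.each do |druid|
    begin
      map_to_cocina(druid)
    rescue StandardError => e
      gaps[druid] = e.message # record the gap for further analysis
    end
  end
  gaps
end
```

Chunking (by content type, collection, APO, etc., as listed above) then just becomes a choice of which druid batches `fetch_druids` yields, and the collected `gaps` hash is the raw material for the "store the mapped data for further analysis" question.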

Preservation Migration:

  • investigate/planning for upcoming workcycle

This milestone is closed.
