"IMHO, simply making all these discussions public via a basic mailing list could help people like me ... have a better awareness of what's going on... We could add our comments / identify possible drawbacks / make some "scalability tests"... In fact I'm really eager to participate to this process" (developer in Belgium)To kick things off, we plan to make better use of this blog and have set a target of posting 2-3 times a week. This is a technical blog, so the anticipated audience include developers, database administrators and those interested in following details of the GBIF software development. We have always welcomed external contributers to this blog and invite any developers working on publishing content through the GBIF network, or developing tools that make use of content discoverable and accessible through GBIF to write posts.
Today we are pleased to welcome Jan Legind, who joins the team as a data administrator to help increase the frequency of network crawling (harvesting) and indexing. Jan will be working closely with data publishers to help improve the quality and quantity of content accessible through GBIF.
The GBIF development group has expanded in the past 6 months, so I'll introduce the whole team working in the secretariat and contracted to GBIF:
- Developers (in order of appearance in the team): Kyle Braak, José Cuadra, Markus Döring (contracted in Germany), Daniel Amariles & Héctor Tobón (contracted at CIAT in Colombia), Federico Méndez, Lars Francke and Oliver Meyn.
- Systems architect: Tim Robertson
- Systems analyst: Andrea Hahn
- Informatics liaison: Burke (Chih-Jen) Ko
- Systems admins: Ciprian Vizitiu & Andrei Cenja
- Data administrator: Jan Legind
The current focus of work at GBIF includes the following major activities:
- Developing and rolling out the Integrated Publishing Toolkit.
- Integrating the checklist (taxonomic, nomenclatural and thematic) content into the current Data portal.
- Developing a processing framework to automate the steps needed to apply quality control and index content for discovery through the Data portal:
  - Specifically, reducing the time and complexity involved in initiating a rollover of the content behind the index
  - Reworking all quality control (geographic, taxonomic and temporal)
  - Automating the process
- Initiating a redesign of the Data portal user interface to provide richer discovery and integration across dataset metadata, checklists and primary biodiversity data.
- Reducing the time between publishing content onto the network and its discovery through the Data portal. This includes providing specific support to those experiencing problems with large datasets, and assisting in migration to the Darwin Core Archive format (see the sketch after this list).
- Technical and user documentation of the publishing options available.
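For readers unfamiliar with the Darwin Core Archive format mentioned above, here is a minimal sketch of what such an archive contains: a zip file holding one or more delimited data files plus a meta.xml descriptor that maps columns to Darwin Core terms. The file names, column choices and example record below are purely illustrative assumptions, not a prescription for any particular dataset.

```python
import zipfile

# Illustrative meta.xml descriptor: it declares the core data file and maps
# each column index to a Darwin Core term. Column selection here is a
# hypothetical example for an occurrence dataset.
META_XML = """<?xml version="1.0" encoding="UTF-8"?>
<archive xmlns="http://rs.tdwg.org/dwc/text/">
  <core encoding="UTF-8" fieldsTerminatedBy="\\t" linesTerminatedBy="\\n"
        ignoreHeaderLines="1"
        rowType="http://rs.tdwg.org/dwc/terms/Occurrence">
    <files>
      <location>occurrence.txt</location>
    </files>
    <id index="0"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
    <field index="2" term="http://rs.tdwg.org/dwc/terms/decimalLatitude"/>
    <field index="3" term="http://rs.tdwg.org/dwc/terms/decimalLongitude"/>
    <field index="4" term="http://rs.tdwg.org/dwc/terms/eventDate"/>
  </core>
</archive>
"""

# Tab-delimited occurrence records; the first column serves as the record id.
# The single record is made up for illustration only.
header = ["occurrenceID", "scientificName", "decimalLatitude",
          "decimalLongitude", "eventDate"]
record = ["obs-0001", "Puma concolor", "4.60971", "-74.08175", "2011-01-15"]
OCCURRENCES = "\t".join(header) + "\n" + "\t".join(record) + "\n"

# Bundle the descriptor and the data file into a zip: that is the archive.
with zipfile.ZipFile("dwc-archive.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("meta.xml", META_XML)
    archive.writestr("occurrence.txt", OCCURRENCES)
```

In practice, publishers will rarely need to assemble an archive by hand: the Integrated Publishing Toolkit is designed to generate Darwin Core Archives for you, so the sketch above is mainly useful for understanding what the format holds.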
Let the blogging begin.
[Please use the #gbif hashtag on Twitter]