Awaiting DOI assignment.
Record added: May 24, 2018, 12:48 p.m.
Record updated: Feb. 10, 2020, 5:43 p.m. by The FAIRsharing Team.
No XSD schemas defined
Conditions of Use
RFC: Submit Errata | https://www.rfc-editor.org/errata.php#reportnew
Download ECMA Specification (PDF) | http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404. ...
No publications available
No guidelines defined
Models and Formats
No identifier schema standards defined
No metrics standards defined
dbSNP contains human single nucleotide variations, microsatellites, and small-scale insertions and deletions along with publication, population frequency, molecular consequence, and genomic and RefSeq mapping information for both common variations and clinical mutations.
Dryad is an open-source, community-led data curation, publishing, and preservation platform for CC0 publicly available research data. Dryad has a long-term data preservation strategy, and is a Core Trust Seal-certified Merritt repository with storage in the US and EU at the San Diego Supercomputing Center, DANS, and Zenodo. While data are undergoing peer review, they can be embargoed if the related journal requires or allows this. Dryad is an independent non-profit that works directly with: researchers, to publish datasets utilising best practices for discovery and reuse; publishers, to support the integration of data availability statements and data citations into their workflows; and institutions, to enable scalable campus support for research data management best practices at low cost. Costs are covered by institutional, publisher, and funder members; otherwise, authors pay a one-time fee of $120 to cover the cost of curation and preservation. Dryad also receives direct funder support through grants.
GWAS Central stores genome-wide association study data. The database content comprises direct submissions received from GWAS authors and consortia in addition to actively gathered data sets from various public sources. GWAS data are discoverable from the perspective of genetic markers, genes, genome regions or phenotypes, via graphical visualizations and detailed downloadable data reports.
Open Science Framework
The Open Science Framework (OSF) is a free and open-source project management tool that supports the entire research lifecycle: planning, execution, reporting, archiving, and discovery. Features include automated versioning, logging of all actions, collaboration support, free and unlimited file storage, registrations, and connections to other tools and services (e.g., Dropbox, figshare, Amazon S3, Dataverse, GitHub). It is 100% free to researchers, open source, and intended for use in all domain areas. OSF has an open, public API to support broad indexing, as well as a partnership with the Internet Archive for long-term preservation, with a $250k preservation fund and an IMLS grant for transfer to the Internet Archive (currently in progress). The OSF supports embargoing during peer review via a view-only link with the ability to anonymize the contributor list. It also provides managed access by allowing access requests and private sharing settings. OSF is a non-profit with direct funder support through grants, government contracts, and community memberships.
Zenodo is a generalist research data repository built and developed by OpenAIRE and CERN. It was developed to aid Open Science and is built on open-source code. Zenodo helps researchers receive credit by making research results citable and, through OpenAIRE, integrates them into existing reporting lines to funding agencies such as the European Commission. Citation information is also passed to DataCite and on to the scholarly aggregators. Content is publicly available under any one of 400 open licences (from opendefinition.org and spdx.org); restricted and closed content is also supported. Use is free for researchers for datasets below 50 GB. Content is kept both online on disk and offline on tape as part of a long-term preservation policy. Zenodo supports managed access (with an access request workflow) as well as embargoing, both generally and during peer review. The base infrastructure of Zenodo is provided by CERN, a non-profit IGO. Projects are funded through grants.
MorphoBank provides a digital archive of biodiversity and evolutionary research data, specifically systematics (the science of determining the evolutionary relationships among species). MorphoBank aids development of the Tree of Life - the genealogy of all living and extinct species. Heritable features, both genotypes (e.g., DNA sequences) and phenotypes (e.g., anatomy, behavior, physiology), are stored as part of the Tree of Life project. While the genomic part of this work is archived at the National Center for Biotechnology Information (NCBI), MorphoBank is part of the infrastructure for storing and sharing phenotype data, including information on anatomy, physiology, behavior and other features of species. One can think of MorphoBank as two databases in one: one that permits researchers to upload images and affiliate data with those images (labels, species names, etc.), and a second that allows researchers to upload morphological data and affiliate it with phylogenetic matrices. In both cases, MorphoBank is project-based, meaning a team of researchers can create a project and share the images and associated data exclusively with each other. When a paper associated with the project is published, the research team can make their data permanently available for view on MorphoBank, where they are then archived. The phylogenetic matrix aspect of MorphoBank is designed to aid systematists working alone or in teams to build large phylogenetic trees using morphology (anatomy, histology, neurology, or any aspect of phenotypes) or a combination of morphology and molecular data.
Harvard Dataverse is a research data repository running on the open-source web application Dataverse. It is fully open to the public, allows upload and browsing of data from all fields of research, and is free for all researchers worldwide (up to 1 TB). Links to related grants, authors, software and research products are provided. Harvard Dataverse supports managed access (with an access request workflow) as well as embargoing, both generally and during peer review. Dataverse allows users to share, preserve, cite, explore, and analyse research data. It facilitates making data available to others, and allows you to replicate others' work more easily. Researchers, data authors, publishers, data distributors, and affiliated institutions all receive academic credit and web visibility. Harvard Dataverse receives support from Harvard University, public and private grants, and an emergent consortium model.
Mendeley Data is a multidisciplinary, free-to-use open repository specialized for research data. Data files of up to 10 GB can be uploaded and shared. Users can search more than 20 million datasets indexed from thousands of data repositories, and collect and share datasets with the research community following the FAIR data principles. Links are available to related authors, software, grants and research. Each version of a dataset is given a unique DOI and dark-archived with DANS (Data Archiving and Networked Services), ensuring that every dataset and citation will remain valid in perpetuity. Metadata is licensed CC0, and datasets are and will continue to be free to access. Mendeley Data will shortly support managed access, and currently supports embargoing of data both generally and while undergoing peer review. It is funded by a subscription model for academic and government entities.
The smartAPI repository is a human- and machine-accessible database of web-based application programming interfaces that are described using the smartAPI specification (based on the OpenAPI initiative).
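To make the idea concrete, here is a minimal sketch of the kind of OpenAPI-style service description such a registry stores. The core keys (`openapi`, `info`, `paths`) come from the OpenAPI specification; the service name, path, and the `x-` extension field are illustrative assumptions, not confirmed smartAPI fields.

```python
import json

# Minimal OpenAPI-style description of a hypothetical annotation service.
# The "x-maturity" extension key is illustrative of how smartAPI layers
# extra metadata onto OpenAPI, not a confirmed smartAPI field name.
api_description = {
    "openapi": "3.0.0",
    "info": {
        "title": "Example Annotation Service",  # hypothetical service
        "version": "1.0.0",
        "description": "Returns annotations for a given identifier.",
        "x-maturity": "development",
    },
    "paths": {
        "/annotate/{id}": {
            "get": {
                "summary": "Fetch annotations by identifier",
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "string"}},
                ],
                "responses": {"200": {"description": "Annotation record"}},
            }
        }
    },
}

# Serialise to JSON, the interchange format such registries consume.
payload = json.dumps(api_description, indent=2)
print(payload)
```

Because the description is plain JSON, both humans and machines can read it, which is what makes registry-wide search over endpoints and parameters possible.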
PLAZA is a platform for comparative, evolutionary, and functional genomics. The platform consists of multiple instances, where each instance contains additional genomes, improved genome annotations, new software tools, etc.
4DNucleome Data Portal
The 4D Nucleome Data Portal (4DN) hosts data generated by the 4DN Network and other reference nucleomics data sets, and an expanding tool set for open data processing and visualization. It is a platform to search, visualize, and download nucleomics data.
Center for Expanded Data Annotation and Retrieval Workbench
The Center for Expanded Data Annotation and Retrieval (CEDAR) Workbench is a database of metadata templates that define the data elements needed to describe particular types of biomedical experiments. The templates include controlled terms and synonyms for specific data elements. CEDAR is an end-to-end process that enables community-based organizations to collaborate to create metadata templates, investigators or curators to use the templates to define the metadata for individual experiments, and scientists to search the metadata to access and analyze the corresponding online datasets.
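The template idea above can be sketched as a small data structure: a set of data elements, each carrying controlled terms and their synonyms. This is a hypothetical illustration of the concept; the field names are assumptions, not CEDAR's actual schema.

```python
# Hypothetical sketch of a CEDAR-style metadata template: data elements
# with controlled terms and synonyms. Field names are illustrative only.
template = {
    "name": "Tissue Sample Experiment",  # hypothetical template name
    "elements": [
        {
            "field": "organism",
            "controlled_terms": ["Homo sapiens", "Mus musculus"],
            "synonyms": {"Homo sapiens": ["human"],
                         "Mus musculus": ["mouse"]},
        },
        {
            "field": "assay_type",
            "controlled_terms": ["RNA-Seq", "ChIP-Seq"],
            "synonyms": {"RNA-Seq": ["transcriptome sequencing"]},
        },
    ],
}

def allowed_values(template, field):
    """Return the controlled terms plus synonyms accepted for a field."""
    for element in template["elements"]:
        if element["field"] == field:
            terms = list(element["controlled_terms"])
            for syns in element["synonyms"].values():
                terms.extend(syns)
            return terms
    return []

print(allowed_values(template, "organism"))
# ['Homo sapiens', 'Mus musculus', 'human', 'mouse']
```

Storing synonyms alongside controlled terms is what lets curators enter familiar names ("human") while search still resolves to the canonical term ("Homo sapiens").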
GlyGen: Computational and Informatics Resources for Glycoscience
GlyGen is a data integration and dissemination project for carbohydrate- and glycoconjugate-related data. GlyGen retrieves information from multiple international data sources, then integrates and harmonizes these data. The web portal allows users to explore the data and perform unique searches that cannot be executed in any of the integrated databases alone.
Information Commons for Rice
Information Commons for Rice (IC4R) is a rice knowledgebase that incorporates rice data through multiple modules: genome-wide expression profiles derived entirely from RNA-Seq data, genomic variations obtained from re-sequencing data of thousands of rice varieties, homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and gene annotations contributed by the rice research community.
Global Research Identifier Database
Global Research Identifier Database (GRID) stores information on research-related organisations worldwide. Each GRID record contains a unique GRID ID, relevant metadata, and relationships between associated institutions.
RESIF Seismic data portal
The RESIF Seismic data portal offers access to seismological and other associated geophysical data from permanent and temporary seismic networks operated all over the world by French research institutions and international partners, to support research on source processes and imaging of the Earth's interior at all scales. RESIF (French seismologic and geodetic network) is a French national equipment for the observation and understanding of the solid Earth.
UNdata is an international statistical database providing search and download options for a variety of statistical resources compiled by the United Nations (UN) statistical system and other international agencies. The numerous databases or tables, collectively known as "datamarts", contain over 60 million data points and cover a wide range of statistical themes including agriculture, crime, communication, development assistance, education, energy, environment, finance, gender, health, labour market, manufacturing, national accounts, population and migration, science and technology, tourism, transport and trade.
Integrated Carbon Observation System Data Portal
ICOS Data Portal provides observational data and elaborated products on greenhouse gases. Data sets can be visualised and downloaded, fully or partially. ICOS data meet the global standards for atmospheric, ecosystem flux and marine observations of greenhouse gases. All information on observations and the metadata is stored in persistent long-term repositories.
Sciflection is a chemical database that allows researchers to publish and share their experiments as well as analytical data. Sciflection accepts structured data uploaded directly from electronic laboratory notebooks (open enventory and Sciformation ELN, open to others) in JSON format. The repository is searchable by chemical structure, text, numeric parameters, and more.
Bolin Centre Database
The Bolin Centre Database is a storage and management facility for data collected and collated at the Bolin Centre for Climate Research. Most of the data are available with open access and can be used under the terms given in the data description. The purpose of the center is to host all datasets produced within the Bolin Centre, to visualise the data and make the data publicly available.
This record is not implemented by any policy.
Internet Engineering Task Force (IETF) (Consortium)