Functionality
 

Defining a systematic review

After logging into the system, the first step is to define/set up a systematic review. Defining a systematic review requires exact descriptions of the following components:
  • Interventions covered in the review
  • Groups of participants (usually this includes the overall group of participants in the trials; in addition, specific subgroups of participants might be defined, e.g. males and females, or participants 60 years of age)
  • Outcomes of interest (currently, RevBase is able to manage binary, continuous, and time-to-event outcome data)
  • Timepoints of the outcome measure (different definitions of timepoints can be used: 1) a single timepoint, e.g. at 3 months; 2) a time window, e.g. between 4 and 6 months; 3) descriptive information only, or in conjunction with a single timepoint or a time window)
  • Assignment of user roles
  • Data collection strategy
  • Designing data extraction forms
By using the interventions entered, RevBase provides all possible combinations to form potential comparisons for the review. The responsible review author then has to decide which of these potential comparisons are relevant for the particular review.
In addition to the interventions, groups of participants, outcome measures, and timepoints have to be defined. For all of these, the system has a built-in possibility for indexing with Medical Subject Headings. In the final step, the system generates all possible combinations of comparisons, groups, outcomes, and timepoints, and the responsible review author has to decide which ones are relevant for a particular review (called PICOTs in RevBase). Please note that interventions, comparisons, groups of participants, outcome measures, and timepoints can be added at any time during the review process. Changes can also be made, but all data already extracted for affected PICOTs will be lost.
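The combination step described above amounts to taking the Cartesian product of the defined components. The following sketch illustrates the idea (the component names are illustrative, not RevBase data):

```python
from itertools import product

# Hypothetical review components (illustrative names only).
comparisons = ["drug A vs placebo", "drug A vs drug B"]
groups = ["all participants", "females"]
outcomes = ["mortality"]
timepoints = ["3 months", "12 months"]

# Every combination of comparison, group, outcome, and timepoint is a
# candidate PICOT; the responsible author then keeps the relevant ones.
candidate_picots = list(product(comparisons, groups, outcomes, timepoints))
```

Here 2 comparisons × 2 groups × 1 outcome × 2 timepoints yield 8 candidate PICOTs for the author to review.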
The responsible review author has to decide at the beginning which data collection strategy is to be used within a specific review (this strategy might be changed during the review under certain conditions, though). Usually, all steps in a review are done in duplicate including data extraction. The responsible review author needs to decide whether a duplicate strategy with consensus is to be used for each step or whether there are steps that are done by one person only. If in doubt the duplicate strategy should be used for all steps because this strategy also allows for single user data processing.
RevBase already provides a core set of data extraction sheets consisting of items related to the general description of a study, quality assessment, and outcome data (core dataset). The outcome data extraction sheets cover binary/dichotomous, continuous, and time-to-event data extraction. The sheets for the core dataset cannot be changed by users. However, the responsible review author can easily design additional data extraction sheets related to the description of the study, participant characteristics, quality assessment, or outcome data (custom dataset). Data extraction sheets can be added and changed at any stage of the review process. However, changes to data fields for which data has already been extracted will cause the system to delete all data already collected with the specific data field.
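The consequence of changing a data field can be pictured with a minimal model of a custom sheet. This is an assumed illustration, not RevBase's internal schema:

```python
# Illustrative model: a sheet is a named list of fields, and extracted
# data is keyed by (study, field).
sheet = {
    "name": "Participant characteristics",
    "fields": ["mean_age", "percent_female", "setting"],
}

extracted = {
    ("Smith 2004", "mean_age"): "62.5",
    ("Smith 2004", "setting"): "outpatient",
}

def change_field(sheet, old, new, extracted):
    """Rename a field; as in RevBase, data collected with it is lost."""
    sheet["fields"] = [new if f == old else f for f in sheet["fields"]]
    return {k: v for k, v in extracted.items() if k[1] != old}

extracted = change_field(sheet, "mean_age", "median_age", extracted)
```

After the change, data recorded under the old field is gone while data for untouched fields survives, which is why such changes should be made before extraction starts.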

Importing references into a systematic review

After searching for reports in external data sources (e.g. Medline), bibliographic information can be imported as RIS-formatted references by simply uploading the formatted text files.
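RIS is a plain-text format in which each line carries a two-letter tag, and `ER` closes a record. A minimal parser for such a file might look like this (a sketch, not RevBase's importer):

```python
def parse_ris(text):
    """Split a RIS export into records; each record is a list of
    (tag, value) pairs so repeated tags such as AU are preserved."""
    records, current = [], []
    for line in text.splitlines():
        if line.startswith("ER  -"):              # end-of-record marker
            records.append(current)
            current = []
        elif len(line) >= 6 and line[2:6] == "  - ":
            current.append((line[:2], line[6:].strip()))
    return records

sample = """TY  - JOUR
AU  - Smith, J.
TI  - A randomized trial of drug A
PY  - 2004
JO  - Example Journal
SP  - 123
ER  - 
"""
refs = parse_ris(sample)
```

Most bibliographic databases and reference managers can export search results in this format, which is why a simple file upload suffices.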

Scanning for duplicate references

Scanning for duplicate references can be done by searching for matching characteristics in references, e.g. the same publication year, article start page, and journal name.
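One way to implement such a scan is to group references by a key built from those characteristics and flag any group with more than one member. A minimal sketch, assuming this particular matching rule:

```python
from collections import defaultdict

def find_duplicates(references):
    """Group references sharing publication year, start page, and
    journal name (case-insensitive); return the suspect groups."""
    groups = defaultdict(list)
    for ref in references:
        key = (ref["year"], ref["start_page"], ref["journal"].lower())
        groups[key].append(ref["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

refs = [
    {"id": 1, "year": 2004, "start_page": 123, "journal": "Example Journal"},
    {"id": 2, "year": 2004, "start_page": 123, "journal": "example journal"},
    {"id": 3, "year": 2010, "start_page": 45, "journal": "Other Journal"},
]
```

Flagged groups would then be checked by a reviewer rather than deleted automatically, since identical keys do not guarantee a true duplicate.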

Screening of references (Title/abstract evaluation)

After the import, titles and abstracts can be screened either by a single user or by multiple users. Possible categorizations of references are: "potentially relevant", "not relevant", and "not relevant but interesting". Reasons for exclusion can be given; the following categories are implemented: "duplicate reference", "study design", "population", "experimental intervention", "control intervention", "outcome measures", and "other". A free-text field for comments is also available.

For each reference, one or more documents can be uploaded. Supported file types include PDF, HTML, DOC, RTF, TXT, and graphic formats.

Consensus on screening

Before moving on to the next step, a consensus on which references to include in the fulltext evaluation needs to be reached. To facilitate consensus, any discrepancies are highlighted in red by the system. The system also allows filtering so that only the discrepancies are shown for the consensus. In the case of a single screener, no consensus is required. Fulltext evaluation can only be done on references for which consensus has been reached, but consensus need not wait until all references have been screened. Rather, references screened in duplicate can be reconciled and moved to the next step, while references without consensus have to wait before moving on.
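Detecting the discrepancies to highlight boils down to comparing the two screeners' verdicts per reference. A minimal sketch of that comparison, with hypothetical data:

```python
def discrepancies(screener_a, screener_b):
    """Return reference ids on which the two screeners disagree --
    the items a system like RevBase would highlight for consensus."""
    return sorted(ref_id for ref_id in screener_a
                  if screener_a[ref_id] != screener_b.get(ref_id))

# Hypothetical verdicts from two screeners, keyed by reference id.
a = {1: "potentially relevant", 2: "not relevant", 3: "potentially relevant"}
b = {1: "potentially relevant", 2: "potentially relevant", 3: "not relevant"}
```

Filtering the consensus view down to `discrepancies(a, b)` lets reviewers resolve only the contested references, since matching verdicts need no discussion.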

Eligibility of references (Fulltext evaluation)

After the consensus on which references merit more detailed evaluation, the fulltexts of the references can be evaluated for inclusion. Possible categorizations of references are: "relevant", "not relevant", and "unclear". Reasons for exclusion can be given; the following categories are implemented: "duplicate reference", "study design", "population", "experimental intervention", "control intervention", "outcome measures", and "other". A free-text field for comments is also available.
Fulltext evaluation can be done either on screen, based on the uploaded documents, or based on paper copies of the reports.

Consensus on eligibility

After fulltext reports have been evaluated in duplicate, a consensus on which reports should be included in the review needs to be reached. To facilitate consensus, any discrepancies are highlighted in red by the system. The system also allows filtering so that only the discrepancies are shown for the consensus. In the case of a single reviewer, no consensus is required.

Assigning reports to studies and eligibility of studies

Because a systematic review is based on studies, not references, each reference has to be assigned to a particular study. Usually, a study is named after the first author and publication year of the main report of the study. The system suggests a study name on this basis, but the user is free to overwrite it.
Documents related to references are also automatically assigned to studies.
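The naming convention above is easy to sketch; the exact rule RevBase applies is assumed here, with the surname taken from a "Surname, Initials" author string:

```python
def suggest_study_name(first_author, year):
    """Suggested study name: surname of the first author plus the
    publication year of the main report, e.g. 'Smith 2004'."""
    surname = first_author.split(",")[0].strip()
    return f"{surname} {year}"
```

The suggestion is only a default; as noted above, the user can overwrite it, for instance to disambiguate two studies by the same author in the same year.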

Data extraction

Note: before starting data extraction, data extraction sheets might need to be designed (see above).
Data extraction using RevBase is straightforward. A codebook with general guidance, especially regarding quality assessment, is incorporated in RevBase and available online for each of the relevant questions. However, users are free to use their own guidance.
Data extraction can be done using paper copies of the relevant reports or on screen by displaying the uploaded files.

Consensus on data extraction

After duplicate data extraction is complete for a study, a consensus on the extracted data needs to be reached. To facilitate consensus, any discrepancies are highlighted in red by the system. The system also allows filtering so that only the discrepancies are shown for the consensus. In the case of a single extractor, no consensus is required.

Data export

Currently, RevBase supports export of data in tab-delimited format only.
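A tab-delimited export is straightforward to load in any analysis environment. The column names below are illustrative, not RevBase's actual export layout:

```python
import csv
import io

# Hypothetical excerpt of a tab-delimited export (illustrative columns).
export = (
    "study\toutcome\ttimepoint\tevents\ttotal\n"
    "Smith 2004\tmortality\t3 months\t12\t100\n"
)

# csv.DictReader handles the tab-delimited layout directly.
rows = list(csv.DictReader(io.StringIO(export), delimiter="\t"))
```

The same file can be read by statistics packages such as R or Stata, which is the point of exporting to a plain delimited format.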
Note: To ensure the reproducibility of your systematic review and meta-analysis, you should freeze your review using the freeze functionality. Data can still be changed afterwards, but the frozen version of the data remains unchanged.

