Explore the Annotation Details Page

The Annotation Details Page provides in-depth access to all settings, assignments, and tracking tools for a specific annotation project in Gesund.ai. This tutorial walks through each tab within the page, covering everything from team setup to review metrics and backup options.

1. Overview

The Overview tab gives you a high-level summary of the project. It displays total studies, annotation and review progress, annotator activity, and label tracking. Project managers can use this tab to monitor overall status, ensure team members are completing tasks, and verify that reviews are being conducted.

2. Team

In the Team tab, you can manage user roles and assignments. Each member can be assigned roles such as Annotator, Reviewer, Manager, or Guest. You can also toggle whether a user is responsible for Data Quality Control. This tab ensures the right people have the right permissions throughout the project.

3. Configuration

The Configuration tab defines the structure and rules for your annotation project. Here, you set segmentation targets like Dice and ICC scores for each label (e.g., left lung, right lung), enable AI assistance, and configure auto-loading of predictions. These settings ensure consistency and quality across all annotations.
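Dice and ICC are standard overlap and agreement statistics rather than Gesund.ai-specific measures. To build intuition for what a Dice target means, here is a minimal sketch of the Dice coefficient for two binary segmentation masks; it is illustrative Python using NumPy, not part of the platform's API:

    import numpy as np

    def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice coefficient between two binary segmentation masks.

        Dice = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
        """
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        total = a.sum() + b.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / total

    # Example: two slightly different "left lung" masks on a 4x4 grid
    annotator_1 = np.array([[0, 1, 1, 0],
                            [0, 1, 1, 0],
                            [0, 1, 1, 0],
                            [0, 0, 0, 0]])
    annotator_2 = np.array([[0, 1, 1, 0],
                            [0, 1, 1, 1],
                            [0, 1, 1, 0],
                            [0, 0, 0, 0]])
    print(f"Dice: {dice_score(annotator_1, annotator_2):.3f}")  # Dice: 0.923

A per-label Dice target of, say, 0.90 therefore requires near-complete overlap between an annotation and its reference, which is why targets are typically set per label rather than project-wide.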

4. Study Table

The Study Table lists each study in your dataset, along with assignment status, completion progress, and modality details. Annotators can view how many instances exist per study, which labels have been applied, and who is responsible for each case. This is your central hub for tracking work at the study level.

5. Analytics

The Analytics tab provides detailed timing and activity logs per user and study. It helps project managers understand how long each annotation takes and monitor progress trends. You can evaluate duration, start and update times, and completion ratios for each assigned user.
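If you export this activity data for offline analysis, the same aggregates are straightforward to reproduce. The sketch below uses pandas on a hypothetical log; the column names are illustrative only, not a documented Gesund.ai export format:

    import pandas as pd

    # Hypothetical activity log; column names are illustrative only.
    log = pd.DataFrame({
        "user":       ["alice", "alice", "bob", "bob", "bob"],
        "study_id":   ["S001", "S002", "S001", "S003", "S004"],
        "duration_s": [420, 310, 505, 290, 380],   # time spent annotating
        "completed":  [True, True, True, False, True],
    })

    # Per-user study count, mean annotation time, and completion ratio
    summary = log.groupby("user").agg(
        studies=("study_id", "nunique"),
        mean_duration_s=("duration_s", "mean"),
        completion_ratio=("completed", "mean"),
    )
    print(summary)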

6. Pre-Annotation

In the Pre-Annotation tab, you can view which AI models are deployed and linked to your dataset. It includes model details such as version, modality, and anatomy focus. You can also see dataset metadata like source hospital, modality type, and number of studies. This section supports setting up model-driven annotations.

7. Annotation Agreement

The Annotation Agreement tab helps evaluate inter-annotator reliability. It offers metrics like Dice scores, ICC, and segmentation complexity, allowing you to measure annotation consistency across users. This is critical for quality assurance and research-grade datasets.
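The exact metrics shown depend on your project configuration, but the underlying statistics are standard. As a point of reference, here is a minimal sketch of ICC(2,1), a common formulation for inter-rater agreement, applied to hypothetical per-study Dice scores from three annotators; this is plain NumPy for illustration, and the platform may use a different ICC variant:

    import numpy as np

    def icc_2_1(ratings: np.ndarray) -> float:
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        `ratings` is an (n_subjects, k_raters) matrix -- for example, one
        quality score per study from each annotator (Shrout & Fleiss, 1979).
        """
        y = np.asarray(ratings, dtype=float)
        n, k = y.shape
        grand = y.mean()

        # Two-way ANOVA decomposition
        ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # subjects
        ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # raters
        ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
        ms_r = ss_rows / (n - 1)
        ms_c = ss_cols / (k - 1)
        ms_e = ss_err / ((n - 1) * (k - 1))

        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # Per-study Dice scores from three annotators (illustrative values)
    scores = np.array([
        [0.91, 0.89, 0.90],
        [0.75, 0.78, 0.72],
        [0.88, 0.85, 0.86],
        [0.60, 0.64, 0.62],
        [0.95, 0.93, 0.96],
    ])
    print(f"ICC(2,1) = {icc_2_1(scores):.3f}")  # ICC(2,1) = 0.979

Values near 1.0 indicate that annotators rank and score studies consistently, which is what makes a dataset defensible for research use.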

8. Guideline

Use the Guideline tab to add annotation instructions for each label. Guidelines can include text, images, or external links to ensure annotators understand labeling requirements. Providing clear guidelines helps standardize annotations across different users.

9. Backup

The Backup tab lists any backups generated for your project. This ensures that your annotation data can be safely restored if needed. While this tab may be empty during the early stages of a project, it's an important safeguard for long-term projects.


Note: This video demonstrates each tab within a single project. While labels and features may vary, the tab structure remains consistent across projects.