Process Area·6 min read·Updated Apr 4, 2026

What Level 2 Training & Competency Maturity Looks Like in Medical Device Organizations

Learn what training maturity level 2 looks like in medical device companies and how developing organizations begin building competency-based systems.

Your LMS tracks completion rates. 97% of training assignments are completed on time. Your quality metrics show no improvement. Root cause investigations keep finding "inadequate training" as a contributing factor. You're training everyone. Nothing's changing. What's going on?

This is the defining paradox of Level 2 training maturity. The organization has invested in infrastructure — a training procedure, a learning management system, role-based matrices, even some hands-on assessments for critical processes. The system looks credible. The data looks good. And yet the gap between training activity and workforce competency remains stubbornly wide.

The answer is that completion is not competence, and Level 2 organizations have built sophisticated systems for measuring the wrong thing.

The Plateau Explained

Level 2 represents genuine progress from the reactive chaos of Level 1. Training requirements are documented. An LMS automates assignment and tracking. Some roles have training matrices that specify which procedures and work instructions apply. High-risk manufacturing processes may include hands-on qualification steps. The organization has a training SOP that describes a reasonable process.

But read-and-acknowledge remains the primary training modality. For the vast majority of training events — SOP revisions, GMP refreshers, new procedure rollouts — the experience is identical to Level 1: read the document, click "acknowledge," move on. The LMS captures the click. The dashboard reflects the completion. Nobody verifies whether anything was learned.

This creates a dangerous illusion. Management sees 97% on-time completion and concludes the training system is working. Quality sees recurring human-error deviations and concludes it is not. Both are looking at the same system and reaching opposite conclusions because they are measuring different things.

The Level 2 Tells

Organizations at this maturity level exhibit a consistent pattern of characteristics. Recognizing them is the first step toward breaking through the plateau.

Training matrices exist but do not drive decisions. The matrix was built for a certification audit or a major customer request. It lists SOPs by role. But it does not specify what competency each training activity should produce, what assessment method will verify that competency, or how frequently competency should be reassessed. It is a compliance artifact, not a management tool.
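To make the missing fields concrete, a competency-driven matrix entry would pair each document with the competency it should produce, the method that verifies it, and a reassessment interval. This is an illustrative sketch only; the field names and the example values are invented, not drawn from any standard or specific system.

```python
from dataclasses import dataclass

@dataclass
class MatrixEntry:
    """One row of a competency-driven training matrix (illustrative fields)."""
    role: str                # job role the entry applies to
    document: str            # SOP or work instruction identifier
    competency: str          # what a trained person must demonstrably do
    assessment: str          # how that competency is verified
    reassess_months: int     # how often competency is re-verified

# Hypothetical entry: a matrix that stores only (role, document) pairs is a
# compliance artifact; the last three fields are what make it a management tool.
entry = MatrixEntry(
    role="Assembly Operator",
    document="SOP-123 Rev C",
    competency="Performs seal-strength test and dispositions out-of-spec results",
    assessment="Observed demonstration against a standardized checklist",
    reassess_months=12,
)

print(entry.assessment)
```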

Assessments are present but not rigorous. Where assessments exist, they tend to test recall rather than application. Quiz questions ask what a procedure says, not what a person should do when reality deviates from the procedure. Practical evaluations lack standardized criteria — two supervisors observing the same operator may reach different conclusions about competency. Pass rates are near 100%, which sounds reassuring until you realize it means the assessments cannot differentiate between competent and not-yet-competent personnel.
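The point about near-100% pass rates can be made quantitative with a classical item-discrimination check from test theory: compare pass rates on a question between the strongest and weakest assessees. The data below is invented for illustration; when everyone passes, the index is zero and the item carries no information about who is competent.

```python
def discrimination_index(high_group, low_group):
    """Classical item discrimination: pass rate among top performers minus
    pass rate among bottom performers on one question (range -1 to 1)."""
    p_high = sum(high_group) / len(high_group)
    p_low = sum(low_group) / len(low_group)
    return p_high - p_low

# Invented results (1 = correct answer) for a single quiz item, with
# assessees split into top and bottom halves by overall score.
top = [1, 1, 1, 1, 1]      # strongest assessees
bottom = [1, 1, 1, 1, 1]   # weakest assessees

# Everyone answers correctly, so the item cannot separate competent from
# not-yet-competent personnel, however reassuring the pass rate looks.
print(discrimination_index(top, bottom))   # 0.0
```

An item that discriminates well (say, index above 0.3) is doing assessment work; an item near zero is ceremony.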

Training needs analysis is event-driven. The organization conducts training needs analysis only when something forces it: a product launch, a major nonconformance, an audit finding, a regulatory change. There is no systematic, periodic review of organizational capability against current and emerging requirements. Departments with proactive managers maintain stronger training programs than departments where training is treated as overhead. The quality of workforce development depends on individual leadership rather than organizational infrastructure.

The gap between policy and practice is widening. The training SOP describes a competency-based approach. Actual practice is read-and-sign with exceptions. This gap is particularly dangerous because it means the documented system cannot be relied upon as a description of reality — exactly the condition that auditors are trained to identify.

Documentation quality varies by department. Some departments maintain detailed training records with assessment results and retraining schedules. Others produce only completion timestamps. When an auditor samples across departments, the inconsistency itself becomes a finding, because it reveals that the training procedure is not being implemented uniformly.

Why Organizations Stall Here

Level 2 is comfortable. The LMS generates reports. Auditors see a training procedure and matrices and records. The most obvious regulatory gaps from Level 1 have been closed. There is no burning platform that forces further investment.

The forces that hold organizations at Level 2 are practical, not philosophical. Meaningful competency assessment takes time — time on the production floor, time from supervisors, time from subject matter experts who are already stretched thin. Read-and-sign takes minutes. Observed competency demonstration takes hours. When production schedules are tight, the faster modality wins.

Additionally, the training function at Level 2 typically lacks the organizational authority to mandate change. Training coordinators can send reminders and escalate overdue assignments. They cannot require a manufacturing director to allocate floor time for competency assessments. The system improves only when leadership decides that verified competency matters more than completion percentages.

The Regulatory Exposure

Level 2 organizations satisfy a surface reading of 21 CFR 820.25 and may pass routine audits. But they are vulnerable to any inspector who probes beyond the records. The effectiveness evaluation requirement in ISO 13485 Section 6.2 is the specific weak point: "evaluate the effectiveness of the actions taken." A completion timestamp does not constitute effectiveness evaluation. An assessment that everyone passes does not demonstrate that the assessment discriminates between competent and not-yet-competent personnel.

Notified bodies conducting EU MDR assessments are increasingly asking for evidence that training produces qualified personnel — not just evidence that training occurred. Level 2 organizations struggle to provide this evidence because their systems were designed to track activity, not verify outcome.

Breaking Through

The transition from Level 2 to Level 3 requires one fundamental change: making competency criteria — not completion tracking — the primary measure of training success. This means defining, for each role and each critical process, what a competent person can demonstrably do. It means selecting assessment methods that can actually distinguish between competent and not-yet-competent performance. And it means building the organizational discipline to act on assessment results, including the willingness to restrict personnel from performing tasks they have not demonstrated the ability to perform.

Standardization before sophistication. Before investing in advanced learning technologies or elaborate competency frameworks, ensure that existing processes are applied consistently across every department, every site, and every role. The inconsistency that characterizes Level 2 is both its most visible symptom and its most actionable improvement opportunity.

