What Level 1 Training & Competency Maturity Looks Like in Medical Device Organizations
Discover how training maturity level 1 manifests in medical device companies and what regulatory gaps put your organization at risk during audits.
The FDA investigator asks for training records for the three operators running your Class III assembly line. Records show they each read the assembly SOP and signed within their first week. She asks how you verified they can actually perform the process. Silence. She asks if you've assessed training effectiveness per 21 CFR 820.25. More silence. The 483 writes itself.
This is Level 1 training maturity. Not the absence of training — the absence of competency verification. The organization has records. It has signatures. What it does not have is evidence that anyone can do anything. And the distance between those two states is where regulatory risk, quality failures, and patient safety gaps live.
How You Got Here
No organization designs a Level 1 training system on purpose. It emerges from a reasonable-sounding premise: if people read the procedure, they know the procedure. The problem is that reading is not learning, and knowing is not doing. A manufacturing technician can read an assembly SOP cover to cover and still be unable to perform the torque sequence correctly under production conditions. A quality engineer can sign off on the CAPA procedure and still write investigations that fail to identify root cause.
Level 1 persists because it is efficient. Read-and-sign scales effortlessly. An LMS can distribute a revised SOP to two hundred people overnight and collect electronic signatures by Friday. The completion dashboard turns green. Management review gets a reassuring percentage. Nobody asks the uncomfortable follow-up question: did any of this change what people actually do?
What the Investigator Sees
FDA investigators and notified body auditors have become adept at distinguishing documented compliance from demonstrated competence. They no longer accept a binder of signed training records as sufficient evidence. Instead, they probe.
They ask operators to explain the critical quality attributes of the product they are building. They ask quality engineers to walk through how they would conduct an investigation. They ask design engineers to describe the risk analysis methodology referenced in the procedure they signed. When the answers reveal a gap between what was signed and what is understood, the observation follows.
21 CFR 820.25 requires that personnel be trained to "adequately perform their assigned responsibilities." The word "adequately" demands more than exposure to a document. ISO 13485 Clause 6.2 requires the organization to "evaluate the effectiveness of the actions taken" — meaning you must verify that training actually worked. EU MDR Article 10(9) requires personnel to "possess the necessary qualifications and experience." At Level 1, none of these requirements are met in substance, regardless of how complete the signature records appear.
The Tells
Level 1 organizations share a consistent set of characteristics that are visible even from the inside.
Training assignments are identical across roles. A quality engineer and a warehouse associate receive the same onboarding packet because the system distributes by department rather than by competency requirement. There is no mapping between what a person needs to be able to do and what training they receive.
The only training trigger is a document revision. When an SOP changes, affected personnel re-sign. But when a deviation reveals a knowledge gap, when a customer complaint points to an operator error, or when a new risk is identified during process validation, there is no mechanism to initiate targeted training in response.
Retraining is the default corrective action, and it means the same thing as initial training: read and sign. The CAPA system prescribes the remedy that already failed, because no alternative modality exists.
Training metrics consist of one number: percent complete. That number tells management nothing about competency, nothing about effectiveness, and nothing about risk. But it fits on a slide, so it persists.
The Real Cost
The regulatory exposure is obvious, but the operational cost is larger. When competency is unverified, deviation rates are higher than they need to be. Investigations take longer because investigators lack the analytical skills that targeted training would develop. Process transfers and scale-ups stumble because the receiving team's capability is assumed rather than assessed.
The most insidious cost is the one nobody measures: the gradual acceptance that training is a compliance exercise rather than a performance tool. Once that belief takes root, the entire organization treats training as overhead. Budget requests for training improvements compete poorly against production investments. The training function shrinks to an administrative role. And the gap between "trained" and "competent" widens until an auditor or an adverse event forces a reckoning.
The Path Forward
Moving from Level 1 to Level 2 does not require new technology or a massive budget. It requires a philosophical shift: from training as documentation to training as competency development. The first concrete steps are straightforward. Map each role to the specific competencies it requires. Introduce at least one assessment method beyond read-and-sign for your highest-risk processes. Define what "effective" training looks like in terms an auditor could verify.
These are achievable steps. They are also urgent ones. Every day that training maturity remains at Level 1 is a day the organization is accumulating unquantified risk — in regulatory exposure, in product quality, and in the capability of the workforce that patients depend on.