Process Area · Updated Apr 4, 2026

What Level 1 Complaint Handling Maturity Looks Like in Medical Device Organizations

Identify the indicators of Level 1 complaint handling maturity in your medical device QMS, including gaps in MDR reporting, trending, and CAPA linkage.

The Moment It Becomes Real

The auditor asks to see your complaint trending report. You don't have one. You have a spreadsheet with complaint dates, product codes, and disposition notes. She asks how you identified your last MDR-reportable event. The answer is: a customer called the CEO.

This is not a hypothetical. It is a scene that plays out in medical device companies every quarter, in conference rooms where the quality team realizes — sometimes for the first time — that the distance between what the procedure describes and what actually happens is wider than anyone acknowledged. Level 1 complaint handling is not defined by the absence of a procedure. It is defined by the absence of a system that works.

Where Complaints Go to Disappear

In a Level 1 organization, complaints arrive through whatever channel the customer finds most convenient. A surgeon emails a sales representative about a device that malfunctioned mid-procedure. A distributor in Germany faxes a complaint form to a general inbox that no one monitors daily. A hospital's biomedical engineering department calls the customer service line and leaves a voicemail that gets transcribed three days later. A field service engineer files a report describing a device failure in language that carefully avoids the word "complaint" because no one wants to trigger the paperwork.

Every one of these is a complaint under 21 CFR 820.198 and ISO 13485, clause 8.2.2. In a Level 1 system, some of them reach the quality team. Others sit in inboxes, voicemail systems, and field service databases for weeks. A few never surface at all. The organization cannot answer the question every auditor eventually asks: how do you know you have captured all complaints? At Level 1, the honest answer is that you do not know.

This is not a documentation problem. It is a design problem. The complaint intake process was never designed — it grew organically as the company scaled, and no one went back to close the gaps.

Investigation as Individual Art

When a complaint does reach the quality team, what happens next depends almost entirely on who picks it up. One quality engineer conducts a thorough investigation — requests the returned device, examines it against the device history record, documents failure analysis with photographs, and writes a root cause determination supported by evidence. Another closes the complaint the same afternoon with a single line: "Unable to reproduce. No further action."

Both engineers believe they are doing their job correctly. Neither is wrong in any way that the current system can detect, because there is no investigation standard, no minimum evidence requirement, and no quality review before closure. The procedure says "investigate complaints." It does not say what investigation means.

This variability has consequences that extend far beyond individual complaint files. When investigation quality is inconsistent, root cause data is unreliable. When root cause data is unreliable, trending is meaningless. When trending is meaningless, the organization cannot detect the pattern forming across multiple complaints — the signal that a design change, a supplier issue, or a manufacturing drift is generating failures in the field.

The MDR Reporting Gap

MDR reportability is where Level 1 maturity creates the most immediate regulatory exposure. Under 21 CFR 803.50, manufacturers must report events in which a device may have caused or contributed to a death or serious injury, or in which a malfunction would be likely to cause or contribute to a death or serious injury if it were to recur. Under Article 87 of the EU MDR, manufacturers must report serious incidents without delay, and no later than 15 days after becoming aware of them.

At Level 1, reportability determinations are inconsistent because they depend on individual judgment without a structured framework. The organization may not have a documented decision tree. If one exists, it may not be followed uniformly. A complaint describing a device that "shut off unexpectedly during use" might be coded as reportable by one person and non-reportable by another, depending on how they interpret "could cause or contribute to" a serious injury. There is no escalation pathway for borderline cases, no regulatory affairs review, and no precedent database to ensure that similar complaints receive consistent treatment.

The result is predictable. Some reportable events are filed. Others are missed entirely. Still others are identified weeks or months after the awareness date, making timely filing impossible. When FDA reviews the complaint files during inspection — and they always review complaint files — the inconsistencies are visible in the records.

The Metrics That Do Not Exist

Level 1 organizations typically track complaint volume, and even that number may be unreliable because not all complaints are captured. Beyond volume, the metrics that would indicate system health simply do not exist. There is no measurement of cycle time from receipt to closure, no tracking of MDR reporting timeliness against regulatory deadlines, no assessment of investigation quality, and no data on the percentage of complaints evaluated for CAPA.

The absence of metrics is itself diagnostic. A system that cannot measure its own performance cannot improve. It can only react — to audit findings, to FDA observations, to the complaint that finally escalates into a reportable event because no one saw the pattern building.

Building the Foundation

Moving from Level 1 to Level 2 requires five foundational changes, and none of them are optional.

First, establish a single controlled intake channel. Every complaint, regardless of how it arrives at the organization, must funnel into one system within a defined timeframe. This means training every customer-facing employee — sales, service, clinical support, distribution partners — on what constitutes a complaint and on their obligation to forward it immediately.
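The "defined timeframe" is what makes this auditable: each complaint record needs both the moment a customer-facing employee first learned of the issue and the moment it entered the complaint system. A minimal sketch, in Python, of what such an intake record might look like; the field names and the 24-hour SLA are illustrative assumptions, not requirements from the regulations:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed internal SLA: every complaint reaches the complaint system
# within 24 hours of first awareness. Pick whatever your procedure defines.
ESCALATION_WINDOW = timedelta(hours=24)

@dataclass
class IntakeRecord:
    source: str            # e.g. "sales email", "service voicemail", "distributor fax"
    received_at: datetime  # when a customer-facing employee first learned of the issue
    entered_at: datetime   # when it was logged in the complaint system

    @property
    def forwarded_on_time(self) -> bool:
        # The gap between these two timestamps is what intake metrics measure.
        return self.entered_at - self.received_at <= ESCALATION_WINDOW

# The German distributor's fax from the earlier example, logged three days late:
record = IntakeRecord("distributor fax",
                      received_at=datetime(2026, 3, 2, 9, 0),
                      entered_at=datetime(2026, 3, 5, 14, 0))
```

Capturing both timestamps is the design decision; without the `received_at` field, the lag between the customer and the quality team is invisible.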

Second, document and implement an MDR reportability decision tree that any trained quality professional can follow to a consistent determination. Include clear escalation criteria for borderline cases. Validate the tree against real complaint scenarios from the past two years to confirm it produces correct outcomes.
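The essential property of such a tree is that borderline cases route to escalation rather than to individual judgment. A minimal sketch of the 21 CFR 803.50 decision logic in Python; the determination labels are illustrative, and a real decision tree would carry more branches (e.g. the 5-day report criteria), so treat this as the shape of the logic, not a complete implementation:

```python
from enum import Enum
from typing import Optional

class Determination(Enum):
    REPORTABLE = "file MDR"
    NOT_REPORTABLE = "document rationale, do not file"
    ESCALATE = "route to regulatory affairs review"

def mdr_determination(death: bool,
                      serious_injury: bool,
                      malfunction: bool,
                      recurrence_likely_to_harm: Optional[bool]) -> Determination:
    """Skeleton of the 21 CFR 803.50 criteria: report when the device may
    have caused or contributed to a death or serious injury, or when a
    malfunction would be likely to cause or contribute to one if it
    recurred. None means "cannot determine" and forces escalation
    instead of an individual judgment call."""
    if death or serious_injury:
        return Determination.REPORTABLE
    if malfunction:
        if recurrence_likely_to_harm is None:
            return Determination.ESCALATE  # the borderline case gets a defined path
        if recurrence_likely_to_harm:
            return Determination.REPORTABLE
    return Determination.NOT_REPORTABLE
```

The device that "shut off unexpectedly during use" lands in the `malfunction` branch, and when the harm question cannot be answered from the complaint alone, the tree returns `ESCALATE` for every reviewer, which is exactly the consistency the procedure is meant to produce.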

Third, define minimum investigation standards. Specify the evidence that must be collected, the analysis that must be performed, and the documentation that must be completed before any complaint can be closed. Implement a second-person review before closure.

Fourth, implement a complaint classification taxonomy with controlled codes for device problems, patient outcomes, and failure modes. Free-text descriptions alone will never support trending. Coded data will.
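A sketch of what controlled codes make possible, assuming hypothetical code values; real programs often map these vocabularies to the IMDRF adverse event terminology rather than inventing their own. The point is that coded complaints can be counted, while free text cannot:

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

# Hypothetical controlled vocabularies for illustration only.
class DeviceProblem(Enum):
    UNEXPECTED_SHUTDOWN = "DP-01"
    FAILS_TO_CHARGE = "DP-02"
    DISPLAY_FAULT = "DP-03"

class PatientOutcome(Enum):
    NO_HARM = "PO-00"
    INTERVENTION_REQUIRED = "PO-01"
    SERIOUS_INJURY = "PO-02"

@dataclass(frozen=True)
class ComplaintCode:
    device_problem: DeviceProblem
    patient_outcome: PatientOutcome

# Three coded complaints; two share a device-problem code.
codes = [
    ComplaintCode(DeviceProblem.UNEXPECTED_SHUTDOWN, PatientOutcome.NO_HARM),
    ComplaintCode(DeviceProblem.UNEXPECTED_SHUTDOWN, PatientOutcome.NO_HARM),
    ComplaintCode(DeviceProblem.DISPLAY_FAULT, PatientOutcome.NO_HARM),
]

# Trending is a one-line aggregation once the data is coded.
trend = Counter(c.device_problem for c in codes)
top_problem, count = trend.most_common(1)[0]
```

Two complaints with the narratives "shut off mid-procedure" and "powered down during use" would never match as strings, but both code to `UNEXPECTED_SHUTDOWN` and therefore count toward the same trend.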

Fifth, begin measuring. Track complaint volume by product, cycle time distribution, MDR timeliness, and field completeness. These baseline metrics will reveal the true state of the system — and they will provide the evidence of improvement that auditors and regulators expect to see.
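None of these baseline metrics require more than dates already sitting in the complaint records. A minimal sketch with fabricated example records (the field names are assumptions); cycle time is closure minus receipt, and MDR timeliness compares the filing date against the 30-calendar-day deadline in 21 CFR 803.50:

```python
from datetime import date
from statistics import median

# Hypothetical complaint records; "aware" is the MDR awareness date
# (None for non-reportable complaints) and "filed" the MDR filing date.
complaints = [
    {"received": date(2026, 1, 5),  "closed": date(2026, 1, 19),
     "aware": None,               "filed": None},
    {"received": date(2026, 1, 8),  "closed": date(2026, 2, 20),
     "aware": date(2026, 1, 8),   "filed": date(2026, 2, 12)},
    {"received": date(2026, 1, 12), "closed": date(2026, 1, 30),
     "aware": date(2026, 1, 12),  "filed": date(2026, 2, 9)},
]

# Cycle time from receipt to closure, in calendar days.
cycle_days = [(c["closed"] - c["received"]).days for c in complaints]
median_cycle = median(cycle_days)

# MDR timeliness against the 30-calendar-day deadline.
reportable = [c for c in complaints if c["aware"] is not None]
on_time = sum((c["filed"] - c["aware"]).days <= 30 for c in reportable)
mdr_timeliness = on_time / len(reportable)
```

Even this toy dataset surfaces the kind of finding the article describes: one of the two reportable events was filed 35 days after awareness, so the timeliness rate is 50%, a number a Level 1 organization has never computed.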

None of these changes require new technology. They require decisions, training, and discipline. The organizations that make them move to Level 2 within six to twelve months. The organizations that do not make them remain at Level 1 until the next audit finding — or the next reportable event that should have been caught earlier — forces the conversation again.

If your organization recognizes itself in these descriptions, an honest assessment is the starting point. Take the Complaint Handling Maturity Assessment to identify your specific gaps and build a prioritized improvement roadmap.

Complaint Handling CMM

8 dimensions · 5 levels · 8 deliverables
