Test Standards and Checklists for Multipoint Lock Qualification
I’ve been in those qualification calls where everyone nods at the sample board, someone flicks the handle twice, somebody from sourcing says the finish “feels premium,” and suddenly a half-tested lockset starts getting treated like it’s already production-safe across every sash size, reinforcement scheme, and weather condition. That’s how bad decisions happen.
Here’s the ugly truth: a multipoint lock almost never fails in the polished, showroom-safe way buyers imagine. It fails in the mess—under stack-up, under side load, after cycling, when the keeper’s a hair off, when the rods start dragging, when the sash has just enough sag to expose every lazy assumption the team made back in sampling. You know what I mean if you’ve ever watched hardware go from “fine” to “why is that top point not landing?” in one afternoon.
And I frankly believe this is where procurement teams get seduced by the wrong evidence. Ordering from an aluminum alloy casement window handle supplier is sourcing. Looking at a slim black aluminum casement window handle lock and saying it looks market-ready is aesthetics. Specifying a customized black window lever handle because it photographs well? That’s branding, not qualification. The lock doesn’t care how confident the catalog sounds.
So what are you actually approving?
The standards that actually matter
I’m going to say this plainly because the industry loves fuzzy language: ANSI/BHMA A156.37-2025 is the backbone here, and once you read the actual requirements, a lot of marketing copy starts sounding suspiciously thin. The summary is blunt enough to be useful—Grade 1 lever-operated locks with no load on the latching must open with a maximum torque of 28 in-lbf; Grade 1 multipoint locks must survive one million operating cycles with a 10-pound axial load simulating a door closer; and Grade 1 bolt strength is tested at 1,350 pounds with all latching points engaged. BHMA also ties corrosion evaluation to ASTM B117 and sorts these products into Grades 1, 2, and 3. That’s not fluff. That’s the stuff that ruins weak hardware.
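Those Grade 1 figures are concrete enough to encode directly. Here’s a minimal sketch that turns them into pass/fail gates for a set of measured results—the dictionary keys and function name are my own illustration, not anything defined in A156.37 itself:

```python
# ANSI/BHMA A156.37 Grade 1 figures quoted above, encoded as thresholds.
GRADE_1 = {
    "max_opening_torque_inlbf": 28.0,   # lever-operated, no load on latching
    "min_operating_cycles": 1_000_000,  # with 10 lb axial load (door-closer sim)
    "min_bolt_strength_lbf": 1_350.0,   # all latching points engaged
}

def grade_1_failures(measured: dict) -> list:
    """Return the list of Grade 1 gates a set of measured results fails."""
    failures = []
    if measured["opening_torque_inlbf"] > GRADE_1["max_opening_torque_inlbf"]:
        failures.append("operating force")
    if measured["cycles_survived"] < GRADE_1["min_operating_cycles"]:
        failures.append("durability")
    if measured["bolt_strength_lbf"] < GRADE_1["min_bolt_strength_lbf"]:
        failures.append("bolt strength")
    return failures

# Example: a lockset that drags on the handle but otherwise holds up.
print(grade_1_failures({
    "opening_torque_inlbf": 31.5,
    "cycles_survived": 1_000_000,
    "bolt_strength_lbf": 1_400.0,
}))  # → ['operating force']
```

The point isn’t the script—it’s that every one of these gates is a number, so “meets standard” in a supplier deck should always be replaceable by measured values.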
But BHMA by itself? Not enough. Never was. And I’ve seen teams act like a hardware standard somehow blesses the whole fenestration assembly, which is nonsense if you’ve spent any time around NAFS, project specs, or coastal jobs. The 2022 edition of AAMA/WDMA/CSA 101/I.S.2/A440 replaced the 2017 edition and kept the North American harmonization effort moving, while WDMA Hallmark certification still focuses on finished window and door performance—air, water, structural, impact—not just a handsome piece of trim bolted to a frame. That distinction is where good qualification lives or dies.
And then there’s Miami-Dade, which has zero patience for brochure logic. Their guidance for a casement window with three separate locks that aren’t activated by single-action hardware says you run air infiltration with all locks engaged, then do the 75-mph load test and water test with only the center-most main lock engaged, and then forced-entry resistance with all locks engaged. Read that again and the whole trick becomes obvious: the sequence is designed to expose whether your “extra points” are doing real work—or just making the drawing look robust.

My qualification checklist is not polite
From my experience, qualification starts getting honest the second you stop asking whether the lock is “good” and start asking what exact failure path the test program is trying to smoke out. Different question. Better question. Because a custom black casement window fork handle may be perfectly serviceable on one sash geometry and a total headache on another, and a custom black casement window handle can feel tight and crisp at sample stage while the full hardware train—gearbox, rods, keepers, strikes, reinforcement, fasteners, profile tolerances—quietly drifts into trouble the moment production starts shaving pennies.
I always want five things on the table, fast. Exact certified configuration. Governing test stack. Worst-case specimen. Failure criteria. Post-test change log. Not later—now.
Because here’s the ugly truth nobody likes admitting in supplier meetings: the fifth item is where the bodies are buried. That’s where you find out the coating vendor changed, the wall thickness got “optimized,” the keeper geometry moved, the screws were swapped, the internal zinc grade shifted, or some well-meaning team decided an “equivalent” component was close enough. Close enough is famous last words in hardware qualification.
And yes, I’m biased. I think the phrase “equivalent part” should make engineers visibly uncomfortable.
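The post-test change log is also the one item that’s easy to automate. A minimal sketch of the idea, with entirely invented part numbers and revisions: freeze the BOM that was actually tested, then diff the production BOM against it and flag anything that should trigger a retest conversation.

```python
# BOM frozen at the moment the certified specimen was tested (hypothetical parts).
TESTED_BOM = {
    "gearbox":         ("GB-1042", "rev C"),
    "top_rod":         ("TR-220", "rev A"),
    "keeper":          ("KP-77", "rev B"),
    "mounting_screws": ("SC-8x32-SS", "rev A"),
}

def retest_triggers(production_bom: dict) -> list:
    """Diff production against the tested BOM; any difference is a trigger."""
    triggers = []
    for item, tested in TESTED_BOM.items():
        current = production_bom.get(item)
        if current is None:
            triggers.append(f"{item}: missing from production BOM")
        elif current != tested:
            triggers.append(f"{item}: {tested} -> {current}")
    return triggers

print(retest_triggers({
    "gearbox":         ("GB-1042", "rev C"),
    "top_rod":         ("TR-220", "rev B"),       # wall thickness "optimized"
    "keeper":          ("KP-77", "rev B"),
    "mounting_screws": ("SC-8x32-ZN", "rev A"),   # zinc swapped for stainless
}))  # flags top_rod and mounting_screws
```

An “equivalent” part shows up in this diff like any other change—which is exactly where it belongs.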
The numbers that make skeptics pay attention
Yet whenever someone tells me I’m being too hard on qualification, I go look at the public record—and the public record is not exactly forgiving. The U.S. Consumer Product Safety Commission said it completed 333 cooperative voluntary recalls in FY 2024, including 166 through Fast Track. That’s not background noise. That’s a giant blinking sign telling you defects still make it into the field, onto homes, and into legal headaches with depressing regularity.
That’s the macro view. The product-level stuff gets uglier.
In November 2023, CPSC announced a recall of about 1,900 MI Windows and Doors sliding glass doors because the glass could separate from the frame during hurricane conditions; the recalled units carried LC-PG50 performance labels, sold in coastal U.S. regions, and were priced between $2,000 and $7,000. Then, one month later, CPSC announced a recall of about 12,000 Pella Architect Series casement windows because the sash could detach from the frame and fall; those units sold nationwide from January 2021 through July 2023 for $700 to $10,000 per window. Different OEMs, different failure modes, same nasty lesson: teams still keep confusing component confidence with system qualification.
I’ve heard the pushback. “That’s not a lock failure.” Sure—but that’s exactly the point. Field failure doesn’t care about your org chart. The consumer experiences the assembly, not your departmental boundaries.
And security? People oversimplify that too. The FBI’s 2024 national crime summary says property crime fell an estimated 8.1% from 2023 to 2024 and burglary fell 8.6%, but forcible entry still remained the most common method of entry among reporting agencies. So no, I don’t shrug at bolt strength, cycle counts, handle torque, rod engagement consistency, or forced-entry protocol. I treat them like the boring details that keep lawsuits, callbacks, and ugly distributor calls off your calendar.

The qualification table I would put in every sourcing pack
| Qualification gate | Standard or protocol | What I expect to see | What usually goes wrong |
|---|---|---|---|
| Operating force | ANSI/BHMA A156.37 | Measured opening/closing torque by trim type, with test setup documented | Handle feels fine in hand but spikes under real latch load |
| Durability | ANSI/BHMA A156.37 | Long-cycle report tied to exact lockset configuration and mounting details | Supplier swaps parts after testing |
| Strength/security | ANSI/BHMA A156.37 | Bolt strength and forced-entry evidence with all active points engaged | Secondary points do not engage consistently |
| Air/water/structural | NAFS / WDMA Hallmark path | Finished-product certification for the actual window or door family | Hardware certified, assembly not certified |
| Impact/hurricane | ASTM E1886/E1996 or TAS path | Largest-size or worst-case specimen evidence, clear pass criteria | Marketing claims based on smaller or different specimens |
| Corrosion/finish | ASTM B117 where applicable | Coating system, hours, exposure conditions, and post-test function | Nice finish, bad substrate discipline |
| Configuration control | Internal QA + certification records | BOM freeze, revision log, supplier declarations, retest triggers | “Equivalent” parts quietly introduced |

What I would demand before approval
But I still see teams approving hardware packages with almost none of the paperwork that matters. That’s insane to me. I want the test report, the specimen build, the revision-controlled BOM, the install instruction used during testing, the lab identity, the failure criteria, the retest trigger, and the certification scope. Not a one-page summary. Not a supplier deck. Not a “meets standard” line item jammed into a spreadsheet as if that settles anything.
And I want the ugly development history too—the stuff suppliers usually try to smooth over. Show me the keeper wear. Show me the top rod drag. Show me the corner sag that threw the engagement off by just enough to start intermittent field complaints. Show me which geometry got changed after pilot. Show me the screw pull-out problem that only showed up after cycling. Because if a vendor tells me their prototype path was clean from day one, my guard goes up (fast).
That kind of transparency is also why I don’t over-romanticize component sourcing pages, even useful ones. Whether you’re looking at a silver lift sliding door handle with flush pull set or a casement hardware variant, the component only earns trust when the assembly data backs it up. Before that, it’s just a candidate.
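The approval package above can be gated mechanically, too. A hypothetical sketch—the document labels are my own shorthand for the items listed in this section, not any standard’s terminology:

```python
# Everything the approval package must contain before sign-off.
REQUIRED_EVIDENCE = {
    "test_report",
    "specimen_build_record",
    "revision_controlled_bom",
    "install_instruction_used_in_test",
    "lab_identity",
    "failure_criteria",
    "retest_trigger_definition",
    "certification_scope",
}

def approval_blockers(package: set) -> set:
    """Anything still missing blocks sign-off; a one-page summary clears nothing."""
    return REQUIRED_EVIDENCE - package

# A typical supplier submission: a report and a scope statement, nothing else.
print(sorted(approval_blockers({"test_report", "certification_scope"})))
```

Trivial code, deliberately: if a gate this simple can’t pass, the approval conversation is premature.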
FAQs
What is ANSI/BHMA A156.37 for multipoint locks?
ANSI/BHMA A156.37 is the American National Standard that defines how multipoint locks are categorized and evaluated for operation, durability, strength, security, and related performance attributes, so specifiers can compare products using a shared testing framework instead of relying on vague claims or sales language. I like it because it forces hardware discussions out of the vibe zone and into measurable performance.
What is a multipoint lock qualification checklist?
A multipoint lock qualification checklist is a controlled approval document that verifies the exact lock assembly, test standard, specimen size, hardware revision, pass criteria, and supporting lab evidence required before a window or door system is released for production, specification, or market launch. In plain English, it’s the paper trail that stops a tested build from quietly turning into a cheaper production build.
How do you test a multipoint lock properly?
Testing a multipoint lock properly means evaluating the full installed system for operating force, cycle durability, strength, security, corrosion behavior, and, where relevant, air, water, structural, impact, and forced-entry performance under the governing standard sequence for that product and jurisdiction. Bench testing alone won’t save you, because the real gremlins usually live at the hardware-to-frame interface.
Does Miami-Dade or hurricane testing replace BHMA testing?
Miami-Dade or hurricane testing is a regional and performance-specific qualification path for wind, debris, pressure, and related envelope behavior, while BHMA testing addresses the hardware’s own operational, durability, and strength expectations, so one does not replace the other when the product must satisfy both hardware and fenestration requirements. If your project needs both, do both—trying to shortcut that is how teams buy themselves future pain.

Final take
A multipoint lock isn’t qualified because the handle feels expensive, because the finish looks sharp in a sample box, or because somebody dropped “tested to standard” into a PDF and hoped nobody would ask follow-up questions. It’s qualified when the exact delivered configuration survives the right test stack, on the right specimen, with records tight enough to hold up under distributor scrutiny, warranty pressure, regulatory review, and the kind of internal finger-pointing that starts the minute callbacks show up.
That’s my bar. It should probably be yours too.
If you’re building a real qualification checklist around multipoint lock standards—not a pretty one, a useful one—start by mapping the exact hardware configuration, the governing test pathway, and the revision-control rules before you sign off on a single part number.



