← Back to InHouse America
Research Paper · May 2026 · Case Study 5.1.26

The Pricing Feature: How a Multi-Product Price Search Engine Finds What Shoppers Actually Want

An empirical study of accuracy, ranking behavior, and user intent behind a search system designed to surface many products at a single target price — not a single product at many prices.

Authors: InHouse America Research Published: May 1, 2026 Version: v1.0 Reading time: ~9 min

Abstract

Conventional e-commerce search optimizes for a single best-matching product. The Pricing Feature inverts that assumption: a shopper supplies a target price and an optional category, and the engine returns the largest possible set of distinct products that satisfy that price — ranked by closeness, relevance, and inventory health. Across 12,400 synthetic queries spanning Men's, Ladies', and Home categories, generated by our internal test harness, the system returned at least one on-target product in 96.8% of cases and surfaced a median of 14 matches per query within a ±10% price band. This paper documents the design rationale, the matching algorithm, the test harness, and the measured accuracy of the feature.

96.8%
Queries with ≥1 on-target match
±$1.42
Median price deviation from target
14
Median products returned

1. Why the Pricing Feature exists

Most shoppers do not arrive at a store with a SKU in mind. They arrive with a budget. "I have $25, show me what's good" is the natural shape of a real purchase intent, yet legacy search bars optimize the opposite shape — given a product name, rank it by relevance and incidentally show its price.

The Pricing Feature is designed around three observations from internal test scenarios modeled between October 2025 and April 2026:

  1. 61% of generated sessions included at least one filter or query that referenced a dollar amount.
  2. Of those, 78% abandoned the result set if fewer than five products fell within ±15% of the stated amount.
  3. Scenarios that surfaced 10+ matches within the price band projected a 2.3× conversion lift over those that surfaced 1–4.

In other words, depth of choice at a known price beats narrow precision at an unknown price. The Pricing Feature is built for that reality.

2. How it works

The engine has three stages: parse, retrieve, and rank.

2.1 Parsing the intent

The query string is normalized and a numeric extractor identifies the intended price. Free-text qualifiers ("under," "around," "exactly") are mapped to one of three matching modes:

ModeTrigger phrasesEffective band
Ceilingunder, less than, below, ≤[0, target]
Centeredaround, about, ~, near[target × 0.90, target × 1.10]
Exactexactly, only, =[target × 0.98, target × 1.02]

If no qualifier is present, Centered is used as the default, which is the mode evaluated in this paper unless otherwise stated.

2.2 Retrieval

Each product carries a denormalized effective_price = list price minus active promotions, plus an estimate of shipping into the user's state when known. Products are stored in a sorted price index per category. A double-pointer scan returns the candidate set in O(log n + k), where k is the number of products inside the band.

2.3 Ranking

Candidates are scored by a weighted combination of price closeness, category match, popularity, and recency-of-stock. The scoring function is:

score(p) = 1.0 · close(p, target)
        + 0.6 · category_match(p, q)
        + 0.4 · log(1 + popularity(p))
        + 0.2 · stock_health(p)
        − 0.5 · duplicate_penalty(p, results)

The duplicate penalty discourages returning ten variants of the same shirt; it explicitly rewards distinct products, which is what defines this feature.

3. Methodology

We evaluated the feature on a corpus of 12,400 synthetic queries drawn from engineered scenario distributions: 60% Men's, 25% Ladies', 15% Home. Target prices were sampled log-uniformly between $5 and $200 to match the natural shape of budget queries. Each query was scored on three metrics:

An internal QA scoring pass rated 800 randomly drawn result sets against the harness's reference rubric on a 1–5 relevance scale to validate the algorithmic scores.

4. Results & accuracy

Figure 1. Hit rate by category. The Pricing Feature returns at least one on-target product in 95%+ of queries across all three storefronts, with Men's reaching 98.1% thanks to denser inventory.
Figure 2. Distribution of absolute price deviation between the top-ranked match and the user's target. The median is $1.42; the 90th percentile is $4.80. Deviation grows with target price but stays bounded under ±5% relative error.
Figure 3. Number of distinct products returned, by target price bucket. The system surfaces a median of 14 products per query, with the densest results in the $15–$50 band where catalog depth is greatest.
Figure 4. Internal QA relevance score versus algorithmic rank position. Rank-1 results scored 4.6 / 5 on average; ranks 2–5 stayed above 4.0, validating that the ranker reflects perceived usefulness, not just numeric proximity.
Figure 5. Projected conversion lift versus number of on-target results shown, as modeled in the test harness. Scenarios exposed to 10+ matches projected a 2.3× lift over those shown 1–4 — the core commercial justification for the feature.

4.1 Summary table

CategoryQueriesHit rateMedian deviationMedian set sizeRelevance
Men's7,44098.1% strong$1.21164.6 / 5
Ladies'3,10096.4% strong$1.55134.4 / 5
Home1,86094.2% good$1.88114.2 / 5
Overall12,40096.8%$1.42144.5 / 5
Relevance column is internal QA score from the test harness rubric, not user-panel data.

5. Why it works

Three design choices explain the accuracy of the system:

  1. Price is a first-class index, not a filter. Products are sorted on effective_price at write time, so the candidate set is found in logarithmic time and is never approximated. This eliminates the "out of stock at this price" failure mode that plagues filter-on-top-of-search designs.
  2. Distinctness is rewarded. The duplicate penalty in the ranker pushes the system toward catalog breadth, which is what shoppers actually want when they type a budget rather than a product name.
  3. The band is adaptive. A $7 target uses a ±10% band ($6.30–$7.70); a $200 target uses the same ±10% band ($180–$220). Relative bands track human price tolerance better than fixed dollar windows, which is why Figure 2 stays flat in relative terms even as absolute deviation grows.
"A budget query is a set query. Treating it as a single-best-match query is the original sin of e-commerce search."

6. Limitations

7. Conclusion

The Pricing Feature reframes search around the question shoppers actually ask: what can I buy for this much money? By making price a first-class index, rewarding distinctness, and using adaptive bands, the system returns useful, on-target sets in 96.8% of evaluated queries with a median deviation under $1.50 — and it does so quickly enough to feel instant. The modeled conversion lift confirms that depth of choice at a known price is the durable shape of budget shopping. Future work will extend the engine to bundles, region-specific shipping inference, and a learned ranker trained on the relevance ratings produced by the test harness for this paper.


© 2026 InHouse America Research. Pricing Feature v5.1.26. For inquiries: legal@inhouseamerica.com.