Ruleset Best Practices

How to organize, name, and operate rulesets so they stay manageable as your scoring grows.

Naming Conventions

A clear naming pattern saves hours of confusion later:

  • Include purpose or function ("MQL Scoring", "PQL Usage", "Holiday Campaign")
  • Add version or date when iterating ("MQL v2", "Q1 2026 Refresh")
  • Use consistent formatting across all rulesets in your workspace
  • Avoid generic names — "New Rules" and "Version 2" tell you nothing six months later

Examples that age well:

  • MQL Scoring - Standard
  • Engagement v3 (April 2026 Refresh)
  • Cyber Monday 2025

Examples that age poorly:

  • Test
  • Updated Rules
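The conventions above can even be enforced with a simple check. This is a minimal sketch, not a kenbun feature: the `name_ages_well` helper, the banned-name set, and the heuristic are all illustrative assumptions.

```python
import re

# Hypothetical helper: flags ruleset names that will be hard to
# identify later. The banned names and the two-word heuristic are
# illustrative, not part of kenbun.
GENERIC_NAMES = {"test", "new rules", "updated rules", "version 2"}

def name_ages_well(name: str) -> bool:
    """Return True if a ruleset name states a purpose (and ideally a
    version or date), False if it is too generic to mean anything
    six months later."""
    if name.strip().lower() in GENERIC_NAMES:
        return False
    # A purposeful name has at least two meaningful words,
    # e.g. "MQL Scoring - Standard" or "Cyber Monday 2025".
    words = re.findall(r"[A-Za-z0-9]+", name)
    return len(words) >= 2

print(name_ages_well("MQL Scoring - Standard"))  # True
print(name_ages_well("Updated Rules"))           # False
```

A check like this could run in a naming review or a pre-creation script, wherever your team enforces conventions.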

Organizational Strategy

How you structure rulesets matters as much as how you name them:

  • Group by business function or team responsibility. Marketing, sales, and customer success often score different things.
  • Separate permanent from experimental. Mark experimental rulesets with a clear suffix like (Experimental) or (A/B).
  • Create clear hierarchies. When testing variations, use a base ruleset and child variations: MQL Scoring, MQL Scoring - Test A, MQL Scoring - Test B.
  • Document relationships. Use the description field to note what an experimental ruleset is testing.

Implementation Approach

Start simple and grow:

  1. One primary ruleset per surface for your main scoring approach.
  2. Add a single experimental ruleset when testing changes.
  3. Add specialized rulesets for key segments or product lines once the basics are stable.
  4. Use temporary, date-bounded rulesets for campaigns or promotions.

Resist the urge to fragment scoring across many rulesets too early. Two well-tended rulesets beat eight neglected ones every time.

Transitioning Between Rulesets

The right pattern for replacing a ruleset:

  1. Create the new ruleset alongside the existing one. Don't make it primary yet.
  2. Activate both. Both produce parallel scores.
  3. Compare. Use the tooltip alternates on the Leads table to see how the new ruleset would tier leads differently.
  4. Adjust the new ruleset based on what you see.
  5. Promote the new ruleset to primary.
  6. Deactivate the old ruleset if you don't need it for historical comparison.
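The staged transition above can be sketched as code. kenbun's API is not documented here, so `FakeClient` and every method name below (`create_ruleset`, `activate`, `set_primary`, `deactivate`) are assumptions standing in for whatever interface you actually use; steps 3 and 4 stay manual.

```python
# Minimal in-memory stand-in for a ruleset client, for illustration only.
class FakeClient:
    def __init__(self):
        self.active = set()
        self.primary = None
    def create_ruleset(self, definition):
        return definition
    def activate(self, name):
        self.active.add(name)
    def set_primary(self, name):
        self.primary = name
    def deactivate(self, name):
        self.active.discard(name)

def staged_transition(client, old, new_definition):
    new = client.create_ruleset(new_definition)  # 1. create alongside the old
    client.activate(new)                         # 2. both active, parallel scores
    # 3-4. compare tiering on the Leads table and adjust (manual steps)
    client.set_primary(new)                      # 5. promote to primary
    client.deactivate(old)                       # 6. retire the old ruleset
    return new

c = FakeClient()
c.activate("MQL v1")
c.primary = "MQL v1"
staged_transition(c, "MQL v1", "MQL v2")
print(c.primary, sorted(c.active))  # MQL v2 ['MQL v2']
```

The point of the sketch is the ordering: the new ruleset is live and compared before it becomes primary, so scores never jump overnight.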

This staged transition prevents whiplash for your team — scores don't jump overnight.

Performance Considerations

Multiple active rulesets each consume compute. While kenbun handles up to four active rulesets per surface, more isn't always better:

  • Keep rulesets focused. A ruleset with a tight purpose is easier to reason about and faster to evaluate.
  • Archive obsolete rulesets. Inactive rulesets don't run, but they clutter the UI. Delete the ones you've definitively moved past.
  • Watch for redundant rules. If multiple rulesets are scoring the same event with similar weights, you may be double-counting in some downstream calculation.
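The redundant-rule check in the last bullet is easy to automate if you can export your rules. The `(ruleset, event, weight)` tuples below are made-up sample data; the grouping logic is the point.

```python
from collections import defaultdict

# Illustrative export of active rules as (ruleset, event, weight).
# If more than one ruleset scores the same event, a downstream sum
# may be double-counting it.
rules = [
    ("MQL Scoring", "demo_requested", 30),
    ("Engagement v3", "demo_requested", 25),
    ("MQL Scoring", "pricing_page_view", 10),
]

by_event = defaultdict(list)
for ruleset, event, weight in rules:
    by_event[event].append((ruleset, weight))

for event, hits in by_event.items():
    if len(hits) > 1:
        print(f"'{event}' is scored by {len(hits)} rulesets: {hits}")
```

Here `demo_requested` would be flagged because two rulesets score it with similar weights, which is exactly the overlap worth reviewing.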

Auditing and Maintenance

A layered cadence works for most teams:

  • Monthly: spot-check a handful of leads to confirm scores match intuition.
  • Quarterly: review every active ruleset. Are the rules still firing on real events? Are weights still right?
  • Annually: a major scoring review aligned with your ideal customer profile (ICP) reviews.
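The monthly spot-check can be as simple as sampling a few leads and flagging any whose score lands in a different tier than your intuition expects. Everything below is illustrative: the sample leads, the tier thresholds, and the `expected_tier` labels are assumptions, not kenbun defaults.

```python
import random

# Hypothetical monthly spot-check. Tier thresholds and lead data are
# illustrative only; substitute your own export and expectations.
leads = [
    {"name": "Acme", "score": 92, "expected_tier": "hot"},
    {"name": "Globex", "score": 15, "expected_tier": "hot"},
    {"name": "Initech", "score": 48, "expected_tier": "warm"},
]

def tier(score):
    return "hot" if score >= 70 else "warm" if score >= 40 else "cold"

sample = random.sample(leads, k=min(3, len(leads)))
mismatches = [l["name"] for l in sample
              if tier(l["score"]) != l["expected_tier"]]
print("Review these leads:", mismatches)
```

A lead like "Globex" above, expected hot but scoring cold, is the kind of mismatch that should trigger a closer look at the rules firing (or not firing) for it.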