Peer-reviewed evidence

The research
behind HEXAD

HEXAD is one of the few gamification frameworks to have moved from a practitioner model to a validated empirical instrument. These are the studies that got it there — and the ones that sharpened it.

2012
Framework first introduced
9+
Languages validated
31
Curated studies below
200+
Papers in the full literature
What the research has established

Key findings in plain language

Profiles, not labels

Multiple studies explicitly warn against dominant-type assignment as a design decision. Profile-based thinking — treating all six scores as meaningful — performs better than single-type labelling.

Hexad-12 is now the recommended instrument

The 2023 short form outperforms the original 24-item scale on model fit and on both convergent and discriminant validity. It is not just shorter — it is psychometrically cleaner.

Types shift over six months

Santos et al. (2021, 2023) found that dominant orientations change meaningfully within six months. Static personalisation built on a one-off survey becomes stale. Dynamic profiling is the evidence-supported direction.

Personalisation works — but modestly

HEXAD-based personalisation tends to outperform generic design on experience and sometimes performance metrics. Effect sizes are typically small to moderate. It is real but not transformative.

Disruptor is structurally different

Consistently the least common, least psychometrically stable, and most negatively correlated with other types. It may occupy different motivational territory from the other five orientations.

Context can override type

In sustainability and health domains, contextual motivation toward the specific domain can predict gamification preferences better than HEXAD type alone. HEXAD works best as one input among several.

Study library

Studies by domain

Filter by area. Click any study to see the full citation.

About this library. The HEXAD framework has accumulated a substantial literature — the foundational 2016 scale paper alone has been cited over 450 times, and there are an estimated 150–250 papers that administer or directly test the scale, with a much larger pool citing it in passing. This library is a curated selection of the most significant peer-reviewed contributions: foundational scale papers, language validations, mechanic-mapping studies, and the strongest applied and critical work across domains. It prioritises quality and representativeness over completeness. Studies are loaded from studies.json — you can inspect or download the raw data there.
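The domain filter described above can be sketched in a few lines. This is a minimal illustration, not the site's actual code: it assumes studies.json holds a list of objects with a `domain` field, and the sample records and the `by_domain` helper are hypothetical — the real schema and field names may differ.

```python
import json

# Hypothetical sample mimicking an assumed studies.json schema;
# the real file's fields may be named differently.
SAMPLE = json.loads("""
[
  {"title": "Example foundational scale paper", "year": 2016, "domain": "foundational"},
  {"title": "Example short-form validation",    "year": 2023, "domain": "foundational"},
  {"title": "Example applied health study",     "year": 2021, "domain": "health"}
]
""")

def by_domain(studies, domain):
    """Return the studies tagged with the given domain (case-insensitive)."""
    return [s for s in studies if s["domain"].lower() == domain.lower()]

health = by_domain(SAMPLE, "Health")  # one matching study in the sample
```

In the real page, the same list-and-filter shape would be fed from a `fetch` of studies.json rather than an inline string.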