Technical notes
All estimates contained in the Dementia Awareness Survey report are based on information obtained from people aged 18 and over from all states and territories.
Methodology
The Social Research Centre (SRC) was commissioned by the AIHW to conduct the survey fieldwork. The survey was conducted from 24 July to 15 August 2023. This included a soft launch period from 24 to 25 July 2023, where a small number of people completed the questionnaire to ensure the questionnaire was performing as intended.
The AIHW drafted the questionnaire in consultation with the Dementia Awareness Survey Reference Group, based on existing validated scales developed in Australia as well as questions developed specifically for the survey based on literature review and expert review (Appendix). Three sets of validated scales were used in the survey to measure 1) general dementia knowledge, 2) knowledge of dementia risk factors and misconceptions, and 3) dementia-related stigma.
The Dementia Knowledge Assessment Scale (DKAS; Annear et al. 2017) was used to measure what people know about the most common forms of dementia. It has been widely used in Australia and internationally, which enables comparison with other studies. It comprises statements about the most common forms of dementia that are factually correct or incorrect (Annear et al. 2017). An item score of 0 indicates an incorrect response to a factually true or false statement or an acknowledgment that the respondent does not know the truth of the statement. A score of 1 indicates an individual’s assessment that an item is probably true or probably false with some remaining uncertainty. A score of 2 indicates an individual’s unequivocal alignment with the correct response. The total DKAS score was calculated by summing all 25 items and ranges from 0–50, with a higher score representing better dementia knowledge.
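The item-level scoring rule described above can be sketched in code. This is an illustrative sketch only (the report's analysis used its own pipeline): item truth values and response labels are hypothetical stand-ins for the published DKAS coding frame in Annear et al. (2017).

```python
# Sketch of DKAS scoring. Each item pairs a statement's factual truth value
# with the respondent's answer. Response labels here are illustrative,
# not the published DKAS response frame.

def score_dkas_item(is_true: bool, response: str) -> int:
    """Score one item: 2 = unequivocal correct answer, 1 = 'probably' correct,
    0 = incorrect answer or 'don't know'."""
    if is_true:
        if response == "true":
            return 2
        if response == "probably true":
            return 1
    else:
        if response == "false":
            return 2
        if response == "probably false":
            return 1
    return 0

def dkas_total(items: list[tuple[bool, str]]) -> int:
    """Sum the 25 item scores; the total ranges from 0 to 50."""
    assert len(items) == 25, "the DKAS has 25 items"
    return sum(score_dkas_item(is_true, resp) for is_true, resp in items)
```

A respondent answering every item unequivocally and correctly would score the maximum of 50; answering 'don't know' throughout would score 0.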
The Knowledge of Dementia Risk Reduction (KoDeRR; Bartlett et al. 2022) scale measures knowledge of evidence-based dementia risk factors as well as common misconceptions (or myths). Knowledge of modifiable factors was scored by assigning 2 to ‘strongly agree’ for each risk reduction strategy, 1 to ‘agree’ and 0 to other answers; the total correct contribution score ranges from 0–28. Scores for the 6 common misconception items were calculated by the reverse process (‘strongly disagree’ scored 2, ‘disagree’ 1 and other answers 0), with a range of 0–12, where a score of 12 represents correctly disagreeing with all 6 misconceptions. The overall KoDeRR score was calculated by adding these two sub-scores.
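The KoDeRR scoring rule can be sketched as follows. Note an assumption: a maximum risk-reduction sub-score of 28 at 2 points per item implies 14 risk-reduction items, which is inferred from the ranges above rather than stated in the text.

```python
# Sketch of KoDeRR scoring. The 14 risk-reduction items (inferred from the
# 0-28 range) are scored positively; the 6 misconception items are
# reverse-scored, as described above. All other answers score 0.

RISK_SCORES = {"strongly agree": 2, "agree": 1}
MYTH_SCORES = {"strongly disagree": 2, "disagree": 1}  # reverse-scored

def koderr_score(risk_responses: list[str], myth_responses: list[str]) -> dict:
    """Return the two sub-scores and the overall KoDeRR score."""
    assert len(risk_responses) == 14 and len(myth_responses) == 6
    risk = sum(RISK_SCORES.get(r, 0) for r in risk_responses)
    myths = sum(MYTH_SCORES.get(r, 0) for r in myth_responses)
    return {"risk": risk, "misconceptions": myths, "total": risk + myths}
```

Strongly agreeing with every risk reduction strategy and strongly disagreeing with every misconception gives the maximum sub-scores of 28 and 12, for an overall score of 40.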
The Dementia Public Stigma Scale (DePSS) is a scale designed to measure dementia-related public stigma amongst community-dwelling adults (Kim et al. 2022). DePSS was validated with over 3,000 Massive Open Online Course (MOOC) enrolees (those who had not previously enrolled in the dementia MOOCs). As most MOOC enrolees are from Australia, the acceptability of this scale has been tested with the Australian public. The five factors of the scale broadly reflect all three aspects of stigma (cognitive, emotional, and behavioural). It has 16 items measuring the cognitive (dementia-related stereotypes), emotional (negative prejudices and emotional reactions), and behavioural (discriminatory behaviours) aspects of stigma, each with a seven-point Likert-type scale (1 = ‘strongly disagree’ to 7 = ‘strongly agree’). The total DePSS score was calculated by summing all 16 items (including 6 reversed items), with total possible scores ranging from 16–112. A higher score indicates higher public stigma of dementia. Sub-scores were calculated by summing the relevant items for the three aspects of stigma (possible scores range from 10–70, 4–28, and 2–14 for the cognitive, emotional, and behavioural aspects respectively).
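The DePSS total, with its reverse-scored items, can be sketched as below. The indices of the 6 reversed items are placeholders: which items are reversed is set out in Kim et al. (2022), not in this report.

```python
# Sketch of DePSS total scoring: 16 items on a 1-7 Likert scale, 6 of which
# are reverse-scored (1 maps to 7, 2 to 6, and so on). The reversed item
# indices below are placeholders, not the published item numbering.

REVERSED = {2, 5, 8, 11, 13, 15}  # placeholder indices of the 6 reversed items

def depss_total(responses: list[int]) -> int:
    """Sum all 16 items after reversing the reverse-scored ones; range 16-112."""
    assert len(responses) == 16 and all(1 <= r <= 7 for r in responses)
    return sum((8 - r) if i in REVERSED else r for i, r in enumerate(responses))
```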
In total, twenty interviews were completed with participants from various locations across Australia to identify potentially problematic areas of the questionnaire. Recruitment was completed using the SRC’s dedicated qualitative participant database. Participants were sent a primary approach email and asked to fill out a brief screening questionnaire, after which they could be invited by phone to participate. The purpose of the research was explained, and a convenient time and date scheduled for each participant to join an online video interview. All participants were provided with an information sheet outlining the purpose of the research in greater detail. Participants were reimbursed with an $80 e-voucher for their participation. All participants gave consent verbally prior to taking part in the interview.
Cognitive response processes examined through cognitive testing included:
- comprehension of survey items
- retrieval from memory of relevant information
- judgement or decision processes used when answering survey items
- response processes.
After addressing issues arising from cognitive testing in consultation with project stakeholders, an updated, AIHW-endorsed version of the survey tool was finalised. This included details of all introductory scripts, sequencing instructions, validity checks, conditional displays (e.g., variations between the Life in Australia™, language other than English (LOTE), and Remote versions), display preferences (e.g., display of multiple questions on the same page) and functionality specifications.
Prior to fieldwork starting, standard operational testing procedures were applied to ensure that the script reflected the agreed final electronic version of the questionnaire. These included:
- programming the skips and sequencing instructions as per the final questionnaire
- rigorous checking of the questionnaire in ‘practice mode’ by the SRC including checks of the on-screen presentation of questions and response frames on a range of devices
- randomly allocating dummy data to each field in the questionnaire and examining the resultant frequency counts to check the structural integrity of the script
- rigorous checking of the programming of the skips and sequencing instructions as per the final questionnaire
- rigorous checking of the questionnaire in ‘test mode’ by the SRC, including checks of the on-screen presentation of questions and response frames for all three panels (Life in Australia™, LOTE, and Remote).

Representatives from AIHW also contributed to testing the online survey (Life in Australia™, LOTE, and Remote versions) prior to data collection, including testing translated versions of the questionnaire.
Soft launch (formal pilot) testing was undertaken on Life in Australia™ to confirm the integrity of the questionnaire. This involved initiating a small number of offline records on the first planned day of fieldwork, 24 July 2023. The interviewing team was debriefed, and SRC checked ‘Day 1’ data (i.e., one day after the survey soft launch) to ensure that data collection was operating properly in the live survey instrument and as per the final questionnaire (see Appendix).
To improve the representation of people from non-English speaking backgrounds, the SRC project team and the AIHW chose five languages for translation of the online survey and supporting participant information sheets: Traditional Chinese, Simplified Chinese, Arabic, Vietnamese, and Punjabi. The translations and in-context checking were carried out by Multicultural Management and Marketing (MMM).
For each language, the steps involved in translation included:
- briefing translators regarding the overall tone and messaging, the level of language to use, and general formatting requirements
- preparing text for translation (e.g., exporting the programmed questionnaire into Excel and Word documents for letter and email text)
- back-translations of all materials in one workflow
- independent checking by separate translators for each language
- in-language data collection program (online survey) set up
- typesetting (layout and formatting) of web content and printed materials
- final in-context checks of translated materials.
Both probability and non-probability panels were used to recruit participants for the survey.
Probability panel
Most of the sample (5,108 persons) was recruited through the SRC probability panel, Life in Australia™. Life in Australia™ includes Australian residents aged 18 and over, who are contactable via either a landline or a mobile phone, not including Australian external territories. Members of the panel are recruited via random digit dialling or address-based sampling.
A stratified random sample was drawn from Life in Australia™ panellists on strata defined by age (18–34, 35–44, 45–54, 55–64, 65+), gender, education (less than a bachelor’s degree, bachelor’s degree or above) and speaking a language other than English at home, with the sample being selected in proportion to the number of active panellists in each stratum.
Non-probability panels
Non-probability panels were used to oversample the hardest-to-reach populations, maximising inclusion of groups such as Aboriginal and Torres Strait Islander people, residents of the Northern Territory, people in remote or very remote Australia, and people who use languages other than English at home.
Multicultural Management and Marketing (MMM) provided translation of the questionnaire and supporting information into the five most common languages other than English spoken in Australia (ABS 2021a): Traditional Chinese, Simplified Chinese, Arabic, Vietnamese, and Punjabi. MMM recruited a total of 249 participants across the five languages using their own research panel, community networks and in-language social media.
People who reside in very remote Australia were sourced from the online panel provider i-Link Research, which has over 160,000 panellists. The survey was administered in English, with a final sample size of 88.
Sample profile
The final sample profile is shown below in Table 1.
Subgroup | Life in Australia™ (%) | MMM (%) | i-Link (%) | Total (%) |
---|---|---|---|---|
Gender | ||||
Male | 41.8 | 57.4 | 38.6 | 42.4 |
Female | 57.4 | 42.6 | 61.6 | 56.8 |
Non-binary/ Other gender | 0.8 | 0.4 | – | 0.7 |
Age | ||||
18–24 years | 4.6 | 5.2 | 9.1 | 4.7 |
25–34 years | 12.5 | 33.3 | 14.8 | 13.5 |
35–44 years | 16.3 | 38.6 | 21.6 | 17.4 |
45–54 years | 16.5 | 14.1 | 13.6 | 16.4 |
55–64 years | 18.4 | 3.2 | 11.4 | 17.6 |
65–74 years | 20.3 | 4.8 | 26.1 | 19.7 |
75 years or more | 11.3 | 0.8 | 3.4 | 10.7 |
Unable to establish | 0.1 | – | – | 0.1 |
Education | ||||
Have not completed a qualification | 15.0 | 3.2 | 15.9 | 14.5 |
Certificate I and/or II Level | 2.8 | 4.8 | 11.4 | 3.0 |
Certificate III and/or IV Level | 15.4 | 11.2 | 27.3 | 15.4 |
Advanced Diploma and/or Diploma Level | 11.4 | 22.1 | 17.0 | 11.9 |
Bachelor Degree Level | 24.8 | 25.3 | 17.0 | 24.7 |
Graduate Diploma and/or Graduate Certificate Level | 10.1 | 16.9 | 8.0 | 10.4 |
Postgraduate Degree Level (incl. master degree, doctoral degree, other postgraduate degree) | 18.2 | 16.5 | 2.3 | 17.9 |
Location | ||||
New South Wales | 30.6 | 34.1 | 6.8 | 30.4 |
Victoria | 25.6 | 30.9 | – | 25.4 |
Queensland | 19.6 | 14.5 | 4.5 | 19.1 |
South Australia | 8.4 | 5.2 | 8.0 | 8.3 |
Western Australia | 9.8 | 10.8 | 15.9 | 10.0 |
Tasmania | 2.6 | 1.6 | 1.1 | 2.5 |
Northern Territory | 0.6 | 0.8 | 63.6 | 1.6 |
Australian Capital Territory | 2.8 | 2.0 | – | 2.7 |
Remoteness | ||||
Major cities of Australia | 73.6 | 94.8 | – | 73.4 |
Inner Regional Australia | 19.1 | 2.4 | – | 18.0 |
Outer Regional Australia | 6.5 | 2.4 | 45.5 | 6.9 |
Remote Australia | 0.5 | 0.4 | 14.8 | 0.8 |
Very Remote Australia | 0.2 | – | 39.8 | 0.9 |
Total number | 5,108 | 249 | 88 | 5,445 |
Subgroups with small numbers (e.g. non-binary and other gender) were grouped together in the table.
The completion rate represents completed surveys as a proportion of all members invited to participate in this survey. The overall completion rate for the Life in Australia™ survey was 73.3% (online population = 73.5%; offline population = 61.2%) (see Table 2). Completion rate data is not available for the MMM or i-Link Research sample.
Outcome categories | Total (n) | Total (%) | Online members (n) | Online members (%) | Offline members (n) | Offline members (%) |
---|---|---|---|---|---|---|
Invited to participate | 6,970 | 100.0 | 6,841 | 100.0 | 129 | 100.0 |
Completed interview | 5,108 | 73.3 | 5,029 | 73.5 | 79 | 61.2 |
Refusal and mid-survey terminations | 168 | 2.4 | 164 | 2.4 | 4 | 3.1 |
Non-contacts | 1,497 | 21.5 | 1,455 | 21.3 | 42 | 32.6 |
Other | 197 | 2.8 | 193 | 2.8 | 4 | 3.1 |
Completion Rate (%) | – | 73.3 | – | 73.5 | – | 61.2 |
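The completion rate defined above reduces to a one-line calculation; reproduced here with the Table 2 counts as a quick check of the reported figures:

```python
# Completion rate as defined above: completed surveys as a proportion of all
# panel members invited to participate, using the counts from Table 2.

def completion_rate(completed: int, invited: int) -> float:
    """Completion rate as a percentage."""
    return 100 * completed / invited

overall = completion_rate(5108, 6970)   # ≈ 73.3%
online  = completion_rate(5029, 6841)   # ≈ 73.5%
offline = completion_rate(79, 129)      # ≈ 61.2%
```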
The survey was conducted primarily as an online survey. Telephone interviewer-administered questionnaires were offered to those who could not or did not want to complete questionnaires online (approximately 2.5% of panellists).
Computer-Assisted Telephone Interviewing (CATI) fieldwork
Interviewer briefing
All interviewers and supervisors selected to work on the survey attended a two-hour briefing session, which focused on all aspects of survey administration, including:
- survey context and background, including a detailed explanation of Life in Australia™
- survey procedures and sample management protocols
- the importance of respondent liaison procedures
- strategies to maintain co-operation
- detailed examination of the survey questionnaire, with a focus on the use of pre-coded response lists and item-specific data quality issues.
After the initial briefing session, interviewers engaged in comprehensive practice interviewing. A total of 9 interviewers were briefed on the survey.
Fieldwork quality control procedures
The in-field quality monitoring techniques applied to this project included:
- monitoring (by remote listening) of each interviewer within their first three shifts, whereby the supervisor listened in to at least 75% of the interview and provided comprehensive feedback on data quality issues and respondent liaison techniques
- validation of 16% of the telephone surveys conducted via remote monitoring (covering the interviewers’ approach and commitment-gaining skills, as well as the conduct of the interviews)
- field team debriefing after the first shift and, thereafter, whenever there was important information to impart to the field team about data quality, consistency of interview administration, techniques to avoid refusals, appointment-making conventions, or project performance
- examination of ‘Other (specify)’ responses
- monitoring of timestamps for segments of the survey and overall time taken to complete the survey
- monitoring of the interview-to-refusal ratio by interviewer.
Contact methodology
Life in Australia™ members were contacted with an initial survey invitation via email and SMS (where available), followed by multiple email reminders and a reminder SMS. Up to five reminders in different modes (including email, SMS, and telephone) were administered during the July to August 2023 fieldwork period. Telephone non-response of online panel members who had not yet completed the survey commenced in the second week of fieldwork, with reminder calls encouraging completion of the online survey. Offline members with a valid mobile telephone number were sent a short SMS invitation that contained a link to the survey as well as a reminder SMS halfway through fieldwork. Life in Australia™ call procedures included:
- a six-call regime for the landline sample, with an upper limit of eight call attempts
- a four-call regime for the mobile sample, capped at four call attempts to avoid appearing overzealous in attempts to achieve interviews
- contact attempts were spread over weekday evenings (6:30 pm to 8:30 pm), weekday late afternoon/early evenings (4:30 pm to 6:30 pm), Saturdays (11 am to 5 pm), and Sundays (11 am to 5 pm)
- appointments available any time that the call centre is operational (weekdays between 9 am to 8:30 pm; weekends 11 am to 5 pm)
- an 1800 number to address sample members’ queries and support response maximisation, along with a respondent page on the SRC website (with responses to frequently asked questions).
MMM invited members of their panel of culturally and linguistically diverse people, and distributed information about the survey through community word-of-mouth, to complete the online survey in one of the five languages (Traditional Chinese, Simplified Chinese, Arabic, Vietnamese, Punjabi).
i-Link Research contacted their panel members residing in the Northern Territory and very remote parts of Australia to complete the online survey in English.
All Life in Australia™ members were offered a $10 incentive to complete the survey. Members could also opt out of receiving an incentive. The incentive options were a:
- Coles / Myer gift card
- payment into a PayPal account
- charitable donation to a designated charity (Children’s Ground, Food For Change, Spinal Cord Injuries Australia).
All respondents recruited via i-Link Research received standard points based on their reward system for a survey of this length. Respondents recruited through MMM received no incentive for participating in the survey.
While the survey was available in English and five other main languages used in Australia (Traditional Chinese, Simplified Chinese, Arabic, Vietnamese and Punjabi), people who use other languages may have been excluded from taking part. People from these cultural backgrounds without internet access would also have been excluded.
Open-ended questions and back-coding of questions with an ‘Other (specify)’ option were undertaken by experienced, fully briefed coders. Outputs were validated in accordance with ISO 20252 procedures, using an independent validation approach.
Data quality checks for surveys completed online across all three panels (Life in Australia™, LOTE, remote) consisted of checks for:
- logic
- proportion of ‘don’t know’ and ‘refused’ responses
- speeding
- straight lining
- verbatim responses to open-ended questions.
These indicators were used together to determine respondent removal for poor data quality. The indicators other than verbatim responses were used to identify potentially problematic cases; the verbatim responses were then generally decisive, with responses indicating thoughtful engagement with the survey being kept and others being removed (e.g., nonsense responses like ‘asdfgh’, non sequiturs, swearing).
Data quality is tracked for panel members over time and those with repeated issues are retired from the Life in Australia™ panel.
After these checks, four survey responses were removed due to poor data quality and are not counted toward the Life in Australia™ completion rate. Additionally, five survey responses were flagged with i-Link Research following quality checks and replacements organised.
The sample was designed to provide a random sample of the Australian population aged 18 and over.
The Dementia Awareness Survey consisted of three components that were combined for weighting purposes:
- A random (probability) sample of adults from Life in Australia™.
- A convenience (non-probability) sample of LOTE respondents who speak simplified or traditional Chinese, Arabic, Vietnamese, or Punjabi.
- A convenience (non-probability) sample of adults from very remote Australia or the Northern Territory.
The usual approach to weighting random (probability) samples is a two-step process that aims to reduce biases caused by non-coverage and non-response and to align weighted sample estimates with external data about the target population (Kalton and Flores-Cervantes 2003). First, base weights are calculated to account for each respondent’s initial chance of selection and for the survey’s response rate. Next, the base weights are adjusted to align respondents with the population on key sociodemographic characteristics (Särndal et al. 1992; Valliant et al. 2018).
The convenience (non-probability) samples used non-random mechanisms to recruit participants to the survey, which means that the usual probability (two-step) approach does not apply (Elliott and Valliant 2017). There are several methods for weighting convenience samples and making estimates from them (Valliant 2020) including the ’quasi-randomisation’ method used here. Cases from the convenience sample are matched to cases from the reference sample. Each case from the convenience samples is assigned the base weights of the matching reference sample case. For this survey, the reference sample was the probability cases from Life in Australia™.
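The matching step of the quasi-randomisation method can be sketched as below. This is a minimal illustration: real implementations typically match on an estimated propensity score over many covariates, whereas here a simple Euclidean nearest neighbour on numeric covariates stands in for the matching rule actually used.

```python
# Minimal sketch of the 'quasi-randomisation' step described above: each
# convenience-sample case borrows the base weight of its closest
# reference-sample (probability) case. Euclidean nearest-neighbour matching
# here is an illustrative stand-in for the actual matching rule.

def assign_base_weights(convenience, reference):
    """convenience: list of covariate tuples.
    reference: list of (covariate tuple, base weight) pairs.
    Returns one borrowed base weight per convenience case."""
    weights = []
    for x in convenience:
        _, w = min(
            reference,
            key=lambda rw: sum((a - b) ** 2 for a, b in zip(x, rw[0])),
        )
        weights.append(w)
    return weights
```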
The combined sample then had base weights for two groups – a probability-based one for Life in Australia™ cases and an estimated one for the convenience cases. To derive the adjusted weights, consideration was given to the characteristics on which to align the base weights with the population. The choice of characteristics was guided by those most:
- different between the probability and convenience samples
- associated with the survey’s key questionnaire items
- different between the combined sample and the population.
The set of characteristics used to adjust the weights were state or territory of residence, language spoken at home, age group by highest education, gender and remoteness area. Australian adult population counts and percentages for those aged 18 years and over were obtained from Census 2021 TableBuilder (ABS 2021b) for this set of characteristics (Table 3). For the unweighted demographic data for the Dementia Awareness Survey respondents, refer to the supplementary data tables.
Base weights were adjusted using regression calibration (Deville et al. 1993), implemented in R (R Core Team 2022) using the survey package (Lumley 2021). For more information on the weighting of sample surveys, refer to Valliant et al. (2018).
Category | Benchmark Target (#) | Benchmark Target (%) |
---|---|---|
State or territory of residence | | |
New South Wales | 6,403,715 | 31.54 |
Victoria | 5,201,515 | 25.62 |
Queensland | 4,086,793 | 20.13 |
South Australia | 1,449,803 | 7.14 |
Western Australia | 2,150,234 | 10.59 |
Tasmania | 459,012 | 2.26 |
Northern Territory | 189,570 | 0.93 |
Australian Capital Territory | 359,967 | 1.77 |
Language spoken at home | | |
Speaks a language other than English | 4,901,335 | 24.14 |
Does not speak a language other than English | 15,399,274 | 75.86 |
Age group by highest education | | |
18–24 years | 2,234,139 | 11.01 |
25–34 years x Less than Bachelor degree | 2,087,909 | 10.28 |
25–34 years x Bachelor degree or higher | 1,682,319 | 8.29 |
35–44 years x Less than Bachelor degree | 1,977,498 | 9.74 |
35–44 years x Bachelor degree or higher | 1,590,276 | 7.83 |
45–54 years x Less than Bachelor degree | 2,186,785 | 10.77 |
45–54 years x Bachelor degree or higher | 1,117,226 | 5.50 |
55–64 years x Less than Bachelor degree | 2,270,012 | 11.18 |
55–64 years x Bachelor degree or higher | 784,591 | 3.86 |
65+ years x Less than Bachelor degree | 3,556,689 | 17.52 |
65+ years x Bachelor degree or higher | 813,163 | 4.01 |
Gender | | |
Man or male | 9,971,164 | 49.12 |
Woman or female | 10,329,445 | 50.88 |
Remoteness Area | | |
Major Cities of Australia | 14,654,020 | 72.19 |
Inner Regional Australia | 3,642,222 | 17.94 |
Outer Regional Australia | 1,642,975 | 8.09 |
Remote Australia | 224,531 | 1.11 |
Very Remote Australia | 136,860 | 0.67 |
Source: Census 2021 (ABS 2021b)
The average absolute bias, defined as the absolute percentage point difference between the estimates and the benchmark proportions, was computed as an average across all available categories within each item. The closer this measure is to 0, the more similar the distribution is to the population.
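As defined above, this measure is simply the mean of the absolute percentage-point gaps across an item's categories:

```python
# Average absolute bias as defined above: the mean, across the categories of
# an item, of the absolute percentage-point difference between the weighted
# sample estimates and the census benchmark proportions.

def average_absolute_bias(estimates: list[float], benchmarks: list[float]) -> float:
    """Both arguments are percentages over the same categories, in the same order."""
    assert len(estimates) == len(benchmarks)
    return sum(abs(e - b) for e, b in zip(estimates, benchmarks)) / len(estimates)
```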
The average absolute differences between the population and the weighted estimates from the study are shown in Table 4. There are large biases in the non-probability boosts, which are caused by the specific groups targeted: LOTE respondents who speak Simplified or Traditional Chinese, Arabic, Vietnamese, or Punjabi, and adults from very remote Australia or the Northern Territory.
The non-probability samples greatly over-represent non-English speakers, but this is balanced out by Life in Australia™, which under-represents this group. Both cohorts have an education bias: Life in Australia™ over-represents people with Technical and Further Education (TAFE) qualifications while under-representing people who did not finish high school, and the non-probability sample over-represents people with postgraduate qualifications while under-representing those without a TAFE or university qualification, regardless of whether they completed high school. When the samples are combined, the postgraduate and Year 12 biases are reduced, but some bias remains around people with TAFE qualifications, who are still over-represented. For remoteness, the non-probability sample has very few cases from Inner Regional Australia, but when combined with Life in Australia™ this bias is reduced to approximately zero. Similarly, combining Life in Australia™ with the non-probability sample results in minimal bias for LOTE.
Variable | Life in Australia ™ | Non-Probability Boosts | Combined |
---|---|---|---|
Age | 0.9% | 7.7% | 0.6% |
Birthplace | 5.0% | 33.2% | 3.1% |
Education | 6.3% | 10.8% | 6.4% |
Gender | 0.6% | 4.1% | 0.6% |
LOTE | 2.6% | 50.2% | 0.0% |
Remoteness | 0.4% | 6.5% | 0.0% |
SEIFA | 1.7% | 3.3% | 1.7% |
State | 0.2% | 3.3% | 0.0% |
Note: LOTE = Language other than English, SEIFA = Socio-Economic Indexes for Areas
Presentation of estimates
The report presents estimates derived from survey responses weighted to the appropriate Australian population. Proportions are shown as percentages rounded to one decimal place. All differences reported in estimates across groups are statistically significant at the 95% level of confidence unless specified otherwise.
Means and medians
In some cases, estimates are presented as medians as well as means. This has been done when there was a concern that the means may be skewed by outliers. As the mean is a summary of all data points, it will be distorted by very large outliers. In contrast, the median is simply a description of the mid-point of data – close to half of the responses will be below the median, and half will be above. As a result, the median is not affected greatly by a small number of outliers.
Throughout the report, medians are only used where the mean was noticeably affected by outliers or a skewed distribution, or where this enabled comparison with other published data. All means and medians in the report are labelled as such.
Degree of correlation
When reporting correlations, the following conventions are used: a correlation coefficient of ±1 indicates a perfect correlation; a value between ±0.50 and ±1 indicates a strong correlation; a value between ±0.30 and ±0.49 a medium correlation; a value below ±0.29 a small correlation; and a value of zero indicates no correlation.
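These bands can be expressed as a small helper; the boundaries follow the conventions stated above:

```python
# The correlation-strength bands described above, applied to the absolute
# value of a correlation coefficient.

def correlation_strength(r: float) -> str:
    """Classify a correlation coefficient r in [-1, 1]."""
    a = abs(r)
    if a == 1:
        return "perfect"
    if a >= 0.50:
        return "strong"
    if a >= 0.30:
        return "medium"
    if a > 0:
        return "small"
    return "none"
```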
Significance testing
When comparing two different estimates, it is important to determine whether the difference is likely to reflect a true difference in the underlying population or whether it may be due to sampling error. This process is called ‘significance testing’. There are several variables that are used to calculate whether two estimates are significantly different – the size of the difference, the variability in the sample collected (which indicates the level of sampling error present), and the size of the sample. In this report, a difference is deemed to be statistically significant if the chance of seeing the observed difference under the null hypothesis was less than 5% (p <0.05).
All group differences in the survey are statistically significant at the 95% level of confidence (unless otherwise specified). If a difference is statistically significant, it has been marked with a ‘#’ symbol in the supplementary tables.
Sometimes, even large apparent differences may not be statistically significant. This is particularly the case where there are small sample sizes. Conversely, with a sufficiently large sample, small changes are more likely to be statistically significant.
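The kind of test described here can be illustrated with a two-proportion z-test, one common choice for comparing survey percentages. This is an illustration only: the report does not specify which test was used for each comparison, and significance testing on weighted survey data would use design-adjusted standard errors rather than the simple pooled formula below.

```python
from math import erf, sqrt

# Illustrative two-proportion z-test for the difference between two sample
# percentages. x1 of n1 and x2 of n2 respondents give each answer; the null
# hypothesis is that the underlying population proportions are equal.

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p-value) using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

With these hypothetical counts, a 60% vs 50% split across two groups of 1,000 is significant at the 5% level, while 51% vs 50% is not, which illustrates how the size of the difference and the sample size interact.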
ABS (Australian Bureau of Statistics) (2021a), Cultural diversity: Census, ABS Website, accessed 25 January 2024.
ABS (2021b), Counting Persons, Place of Usual Residence [Census TableBuilder], accessed 10 May 2023.
Annear MJ, Toye C, Elliott K-EJ, McInerney F, Eccleston C and Robinson A (2017) ’Dementia knowledge assessment scale (DKAS): confirmatory factor analysis and comparative subscale scores among an international cohort’, BMC Geriatrics, 17:168, doi:10.1186/s12877-017-0552-y.
Bartlett L, Doherty K, Farrow M, Kim S, Hill E, King A, Alty J, Eccleston C, Kitsos A, Bindoff A and Vickers JC (2022) ‘Island study linking aging and neurodegenerative disease (ISLAND) targeting dementia risk reduction: protocol for a prospective web-based cohort study’, JMIR Research Protocols, 11(3):e34688, doi:10.2196/34688.
Deville J, Särndal C and Sautory O (1993) 'Generalized raking procedures in survey sampling', Journal of the American Statistical Association, 88(423):1013–1020, doi:10.1080/01621459.1993.10476369.
Elliott M and Valliant R (2017) 'Inference for nonprobability samples', Statistical Science, 32: 249–264, doi:10.1214/16-STS598.
Kalton G and Flores-Cervantes I (2003) ‘Weighting methods’, Journal of Official Statistics, 19(2):81–97.
Kim S, Eccleston C, Klekociuk S, Cook PS and Doherty K (2022) ‘Development and psychometric evaluation of the Dementia Public Stigma Scale’, International Journal of Geriatric Psychiatry, 37(2), doi:10.1002/gps.5672.
Lumley T (2021) survey: analysis of complex survey samples [R package], http://r-survey.r-forge.r-project.org/survey/.
R Core Team (2022) R: a language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, https://www.R-project.org/.
Särndal C-E, Swensson B and Wretman J (1992) Model assisted survey sampling, Springer-Verlag Publishing, doi:10.1007/978-1-4612-4378-6.
Valliant R, Dever J and Kreuter F (2018) Practical tools for designing and weighting survey samples, 2nd edn, Springer-Verlag Publishing, doi:10.1007/978-3-319-93632-1.
Valliant R (2020) ‘Comparing alternatives to estimation from nonprobability samples’, Journal of Survey Statistics and Methodology, 8(2):231–263, doi:10.1093/jssam/smz003.