The National Drug Strategy Household Survey (NDSHS) is conducted every three years, and the AIHW has been collating and reporting on these surveys since 1998. The latest survey, the 12th in the series, was conducted from July to November 2016, with first results expected to be released in mid-2017. The latest available data are for 2013. The survey was commissioned and funded by the Australian Government Department of Health.

Scope and coverage

The scope of the NDSHS is the Australian residential population aged 12 years or older. In earlier surveys (2001 and earlier), data were collected from the residential population aged 14 years or older. Households are selected using a multistage, stratified area random sample design. The sample is based on private dwelling households, so some people (such as homeless and institutionalised people) are not included in the survey. The respondent was the household member aged 12 years or older (or 14 years or older in earlier surveys) with the next birthday. Most results are based on the population aged 14 years or older (unless otherwise specified), as this allows consistent comparison with earlier survey results.

Methodology

In 2010 and 2013, the NDSHS was conducted solely using a self-completion drop-and-collect questionnaire. In 2001 and 2004, computer-assisted telephone interviews (CATI) were used in addition to the drop-and-collect survey, and prior to 2001, face-to-face interviews were also used.

The 2016 survey was the first iteration to use a mixed-mode collection, offering respondents a choice of completion options: paper form, online, or via telephone. Respondents were recruited in the same way as in earlier surveys: households were randomly selected, and an interviewer approached each household and attempted to make contact with the selected respondent up to three times. For respondents who elected the paper form, the interviewer arranged a time to collect the completed survey from the household (drop-and-collect). For respondents electing the online survey, the interviewer called or sent an SMS to remind them to complete the survey.

Roy Morgan Research has been responsible for conducting the fieldwork since 1998.

Non-response bias and non-sampling error

Survey estimates are subject to non-sampling errors that can arise from errors in reporting of responses (for example, failure of respondents' memories, incorrect completion of the survey form), the unwillingness of respondents to reveal their true responses and higher levels of non-response from certain subgroups of the population.

A limitation of the survey is that people may not accurately report information relating to illicit drug use and related behaviours because these activities may be illegal. This means that results relating to illicit drugs may be under-reported by some people. Legislation protecting people's privacy and the use of consistent methodology over time mean that the impact of this issue on prevalence estimates is limited.

Weighting

Survey weighting is a process that minimises biases in samples so that results better represent the population of interest, thereby increasing the reliability of national, state and territory and regional estimates. All data in the NDSHS are weighted for probability of selection which takes into account dwelling location, household size, and the age and sex of the respondent.
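The inverse-probability logic behind design weighting can be sketched as follows. This is a minimal illustration with hypothetical figures, not the NDSHS weighting specification (which also adjusts for dwelling location, age and sex benchmarks).

```python
# Minimal sketch of design weighting (hypothetical figures, not NDSHS values).
# A respondent's base weight is the inverse of their selection probability:
# one dwelling selected within the stratum, then one person within the household.

def design_weight(dwellings_in_stratum, dwellings_sampled, household_size):
    """Inverse probability of selection for a respondent."""
    p_dwelling = dwellings_sampled / dwellings_in_stratum
    p_person = 1 / household_size  # one respondent per household (next birthday)
    return 1 / (p_dwelling * p_person)

# A household of 4 in a stratum where 100 of 10,000 dwellings were sampled:
print(design_weight(10_000, 100, 4))  # 400.0
```

Intuitively, this respondent "stands in for" 400 people: those in larger households or less densely sampled strata had a lower chance of selection, so they receive a larger weight.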

Presentation of estimates

Population estimates are calculated by applying survey prevalence rates to the relevant population count and are based on the Australian Bureau of Statistics estimated resident population for the year the survey was conducted. Population estimates are rounded to the nearest 100,000 or 10,000, depending on the size of the estimate.
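The calculation above can be sketched in a few lines. The 1,000,000 cut-off for switching rounding units is an assumption for illustration, as are the example rates and population figure.

```python
def population_estimate(prevalence_rate_pct, resident_population):
    """Apply a survey prevalence rate (%) to the estimated resident population,
    rounding to the nearest 100,000 for large estimates, else 10,000.
    The 1,000,000 threshold is assumed for illustration."""
    estimate = prevalence_rate_pct / 100 * resident_population
    unit = 100_000 if estimate >= 1_000_000 else 10_000
    return round(estimate / unit) * unit

# Hypothetical example: a 15.8% prevalence rate in a population of 20 million.
print(population_estimate(15.8, 20_000_000))  # 3,200,000
print(population_estimate(0.4, 20_000_000))   # 80,000
```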

Relative standard error

Estimates with relative standard errors (RSEs) greater than 50% are marked with ** and those with RSEs between 25% and 50% are marked with *. Estimates with RSEs between 25% and 50% should be treated with caution, and those with RSEs greater than 50% should be considered unreliable for most practical purposes. Only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes.

Significance testing

All time series tables have been tested for statistically significant changes between the most recent two survey waves, but not for other comparisons (such as between sexes or age groups). 'Significant' means 'statistically significant' and a significant increase or decrease is indicated with a #. The difference is statistically significant if the z-statistic, calculated using the pooled estimate of the two rates being compared, is greater than 1.96 or less than -1.96 (a 5% two-tailed test).
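The test described above corresponds to a standard two-proportion z-test with a pooled variance estimate. The sketch below illustrates it with hypothetical prevalence rates and sample sizes; the exact NDSHS pooling formula may differ in detail.

```python
import math

def pooled_z(p1, n1, p2, n2):
    """Two-proportion z-statistic using the pooled estimate of the rate."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

def is_significant(p1, n1, p2, n2):
    """5% two-tailed test: flag the change if |z| exceeds 1.96."""
    return abs(pooled_z(p1, n1, p2, n2)) > 1.96

# Hypothetical change between waves: 15% of 20,000 vs 13% of 20,000 respondents.
print(is_significant(0.15, 20_000, 0.13, 20_000))   # True
# A much smaller change is not flagged:
print(is_significant(0.15, 20_000, 0.149, 20_000))  # False
```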