
About the 2016 survey

The National Drug Strategy Household Survey (NDSHS) collects information on alcohol and tobacco consumption, and illicit drug use, among the general population in Australia. It is conducted every 3 years, and the AIHW has been collating and reporting on these surveys since 1998; the 2016 survey was the 12th in the series. The survey was commissioned and funded by the Australian Government Department of Health.

The survey used a stratified, multistage random sample design. Estimates included in this report are based on the 23,772 survey responses received.

Scope and coverage

The scope of the 2016 NDSHS was the Australian residential population aged 12 years or older.

The 2016 survey sample was based on private dwelling households, so some people (such as homeless and institutionalised people) were not included in the coverage of the survey. This is consistent with the approach in previous years.

Most results presented in this report are based on the population aged 14 years or older (unless otherwise specified), as this allows consistent comparison with earlier survey results.

Methodology

The sample was selected by way of a stratified, multistage random sample. Locations within Australia were stratified by state and territory and part of state (15 strata in total: capital city and rest of state for each state and territory, with the exception of the ACT). The sample was formed by randomly selecting locations (statistical area level 1 in capital cities; for non-capital city areas, statistical area level 2 areas were selected first), then dwellings (a starting address within each location was randomly selected and the dwelling next door to this was then approached). Interviewers then made 3 attempts to establish face-to-face contact with each selected dwelling, and the in-scope person who most recently celebrated their birthday was selected.
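
To make the within-household step concrete, the following is a minimal Python sketch of the 'most recent birthday' selection rule only; the household data structure and field names are hypothetical, and the full multistage area selection is not shown.

    def select_respondent(household_members, min_age=12):
        """Select the in-scope person (aged 12 or older) who most recently
        celebrated their birthday. Field names are illustrative only."""
        in_scope = [p for p in household_members if p["age"] >= min_age]
        if not in_scope:
            return None
        return min(in_scope, key=lambda p: p["days_since_birthday"])

    # Made-up household: C is out of scope (under 12); B had the most recent birthday.
    household = [
        {"name": "A", "age": 45, "days_since_birthday": 120},
        {"name": "B", "age": 15, "days_since_birthday": 30},
        {"name": "C", "age": 9,  "days_since_birthday": 5},
    ]
    print(select_respondent(household)["name"])  # prints 'B'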

The selection and contact methodology is unchanged from 2013.

Population estimates were calculated by weighting responses to account for each respondent's probability of selection and to align the estimates with known population totals.

Mode effects

Selected individuals could choose to complete the survey via a paper form, an online form or via a telephone interview. The 2016 survey was the first time an online form was used—the 2013 and 2010 surveys consisted solely of a self-completion drop-and-collect method, and in earlier years, both computer assisted telephone interviews and face-to-face interviews were used.

It is possible that the tool (also known as the 'mode') used by a respondent could affect the information they provide, introducing a bias in the data and affecting the comparability of data obtained via the different methods.

A total of 23,772 people completed the 2016 survey. Of these, 18,528 (78%) completed it on paper, 5,170 (22%) completed it online and 74 (0.3%) completed it via a telephone interview.

In 2016, respondents who elected to use the online form had different demographic characteristics (such as age and level of education) to respondents who used the paper form.

A respondent's demographic characteristics affect their choice between completing a paper survey or an online survey, and are also known to affect the likelihood of reporting drug use. Therefore, these demographic characteristics needed to be taken into account when assessing whether there is a mode effect.

Regression analysis, which controls for the known demographic characteristics of respondents, was used to test whether there was a mode effect between the three collection modes used in 2016.
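
As an illustration only, a mode-effect test of this kind can be sketched as a logistic regression of a drug-use indicator on collection mode plus demographic controls. The simulated data, column names and model specification below are assumptions for the example, not the AIHW's actual analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "mode": rng.choice(["paper", "online"], size=n),
        "age_group": rng.choice(["14-29", "30-49", "50+"], size=n),
        "sex": rng.choice(["male", "female"], size=n),
    })
    # Simulate a drug-use indicator with a small built-in mode effect
    df["recent_use"] = rng.binomial(1, 0.10 + 0.03 * (df["mode"] == "paper"))

    # Logistic regression of the outcome on mode, controlling for demographics;
    # a significant coefficient on 'mode' would suggest a mode effect.
    model = smf.logit("recent_use ~ C(mode) + C(age_group) + C(sex)", data=df).fit()
    print(model.summary())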

After adjusting for socio-demographic factors, significant differences in prevalence rates between online and paper respondents were found for 4 of the 9 variables studied.

The regression model suggests no significant difference between the paper and online forms for drinking status; lifetime risk and single occasion risk status; and recent use of meth/amphetamines and tranquillisers.

Estimates for smoking, cocaine, pain-killers/opiates and cannabis may have been affected by a mode effect between the paper and online forms (online respondents were less likely than paper respondents to be a daily smoker, or to have used cocaine, pain-killers/opiates or cannabis in the previous 12 months). This should be taken into account when comparing 2016 estimates with previous survey results.

Response rate

Overall, contact was made with 46,487 in-scope households, from which 23,772 questionnaires were categorised as complete and usable. This represented a response rate of 51.1% for the 2016 survey, which was higher than the response rates for the 2013 and 2010 surveys (49.1% and 50.6%, respectively).

Non-response bias and non-sampling error

Survey estimates are subject to non-sampling errors that can arise from errors in reporting of responses (for example, failure of respondents’ memories, incorrect completion of the survey form), the unwillingness of respondents to reveal their true responses and higher levels of non-response from certain subgroups of the population.

The estimation methods used take non-response into account and adjust for any under-representation of population subgroups in an effort to reduce non-response bias.

A limitation of the survey is that the data are self-reported, and people may not accurately report information relating to illicit drug use and related behaviours because these activities may be illegal. This means that results relating to illicit drugs may be under-reported. However, any biases are likely to be relatively consistent at the population level over time, so would not be expected to have much effect on trend analysis. Legislation protecting people's privacy and the use of a consistent methodology over time mean that the impact of this issue on prevalence estimates is limited.

However, some behaviours may become less socially acceptable over time which may lead to an increase in socially desirable responses rather than accurate responses. Increases in media reporting stigmatising a drug may increase the tendency to under-report use (Chalmers et al. 2014). Any potential increase in self-reported socially desirable behaviours needs to be considered when interpreting survey results over time.

Sampling error

All proportions that are calculated from survey data are estimates rather than true population proportions. This means they have a margin of error due to only a sample of the population being surveyed. This is called sampling error.
There are different ways of measuring the sampling error associated with an estimate from a sample survey. The 2016 NDSHS uses both relative standard errors and margins of error, which are included in the supplementary tables.

Relative standard error

The standard error (SE) measures the extent to which an estimate might have varied by chance because only a sample of persons was surveyed. The relative standard error (RSE) is the SE expressed as a percentage of the estimate, and provides an immediate indication of the likely size of the sampling error relative to the estimate.

Only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes. Results with RSEs of between 25% and 50% should be considered with caution, and those with RSEs greater than 50% should be considered unreliable for most practical purposes. Estimates with RSEs of between 25% and 50% are marked in the supplementary tables with '*', those with RSEs between 50% and 90% are marked with '**', and those with RSEs greater than 90% have not been published.
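
As a minimal sketch, the RSE calculation and the flagging rules described above can be written as follows; the example estimate and standard error are made up, and the handling of exact boundary values is illustrative only.

    def rse(estimate, standard_error):
        """Relative standard error: the SE expressed as a percentage of the estimate."""
        return 100 * standard_error / estimate

    def reliability_flag(rse_pct):
        """Apply the reporting rules described above."""
        if rse_pct < 25:
            return ""       # sufficiently reliable for most purposes
        if rse_pct <= 50:
            return "*"      # interpret with caution
        if rse_pct <= 90:
            return "**"     # unreliable for most practical purposes
        return "n.p."       # not published

    example = rse(4.0, 1.2)                      # estimate of 4.0% with SE of 1.2 points
    print(example, reliability_flag(example))    # prints: 30.0 *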

Margin of error

The Margin of Error (MoE) describes the distance from the population value that the sample estimate is likely to be within at the 95% level of confidence. This means that the "true" proportion for the entire population would be within the margin of error around the reported estimate 95% of the time.
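
For example, under the usual normal approximation a 95% margin of error is about 1.96 times the standard error. The sketch below uses made-up numbers and may differ in detail from the MoEs published in the supplementary tables.

    def margin_of_error(standard_error, z=1.96):
        """95% margin of error under a normal approximation."""
        return z * standard_error

    estimate, se = 15.0, 0.8                     # hypothetical estimate (%) and its SE
    moe = margin_of_error(se)
    print(f"{estimate:.1f}% +/- {moe:.1f} percentage points "
          f"(95% CI {estimate - moe:.1f} to {estimate + moe:.1f})")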

Significance testing

When comparing two different estimates, it is important to determine whether the difference is likely to reflect a true difference in the underlying population or whether it may be due to sampling error. This process is called ‘significance testing’. In the key findings, a difference is deemed to be statistically significant if the chance of seeing the observed difference due to sampling error alone was less than 5% (p < 0.05).

All time series tables have been tested for statistically significant changes between 2013 and 2016 but not for other comparisons (such as between sex or age). All increases or decreases described in the key findings are statistically significant at the 95% level of confidence (unless otherwise specified). If a difference is statistically significant, it has been marked with a ‘#’ symbol in the online supplementary tables.

Sometimes, even large apparent differences may not be statistically significant. This is particularly the case in breakdowns of small populations because the small sample size means that sampling error is likely to have a larger effect on the estimates.
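
To illustrate the idea, the sketch below applies a simple two-proportion z-test under simple random sampling assumptions; the NDSHS testing may differ in detail (for example, in how it allows for the survey design), and the prevalence rates and sample sizes are made up.

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(p1, n1, p2, n2):
        """Two-sided p-value for the difference between two sample proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Hypothetical example: 12.2% prevalence in one survey year vs 12.8% in the next
    p_value = two_proportion_z_test(0.122, 23000, 0.128, 23500)
    print(p_value, p_value < 0.05)   # prints the p-value and whether it is below 0.05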

Weighting

All data are weighted for probability of selection, which takes into account dwelling location and household size. Weights are then adjusted so that age-by-sex population estimates align with known population totals.
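
A minimal post-stratification sketch of this two-step idea is shown below: design weights are taken as the inverse of the selection probability and are then scaled within benchmark cells so that weighted totals match known population counts. The cell labels, probabilities and benchmark figures are all hypothetical, and the actual NDSHS weighting is likely more detailed.

    import pandas as pd

    df = pd.DataFrame({
        "age_sex_cell": ["M 14-29", "M 14-29", "F 14-29", "F 30-49"],
        "selection_prob": [0.002, 0.004, 0.003, 0.002],
    })
    df["design_weight"] = 1 / df["selection_prob"]       # inverse probability of selection

    # Hypothetical benchmark population counts for each age-by-sex cell
    benchmarks = {"M 14-29": 2_300_000, "F 14-29": 2_250_000, "F 30-49": 3_400_000}
    cell_total = df.groupby("age_sex_cell")["design_weight"].transform("sum")
    df["final_weight"] = df["design_weight"] * df["age_sex_cell"].map(benchmarks) / cell_total
    print(df)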

Presentation of estimates

Proportions are shown as percentages rounded to 1 decimal place when less than 20% and rounded to a whole number when greater than 20%. Data presented in the key findings and online tables have not been age-standardised.
Population estimates are calculated by applying survey prevalence rates to the relevant population count, based on the June 2016 Australian Bureau of Statistics estimated resident population. Population estimates are shown to the nearest 100,000 or 10,000 in text, depending on the size of the estimate.
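
For example, a population estimate is simply the prevalence rate applied to the relevant population count; the figures in this sketch are placeholders, not actual NDSHS or ABS values.

    erp_14_plus = 20_000_000       # placeholder population count, not the actual ABS ERP
    prevalence = 0.104             # hypothetical prevalence of 10.4%

    estimate = prevalence * erp_14_plus
    print(f"{round(estimate, -5):,.0f}")   # rounded to the nearest 100,000 -> 2,100,000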

Questionnaire changes

To maintain maximum comparability, the 2016 questionnaire (1.4MB PDF) was similar to the 2013 version. Some refinements were made to ensure the questions remained relevant and useful. The major additions to the questionnaire were:

  • Demographics
    • Inclusion of ‘Other (please write in)’ in the sex question (DEMOG1).
  • Section A (Perceptions)
    • Changed the response option in questions A1, A2, A3 and A4 from 'Pain-killers/Analgesics/Opioids' to 'Pain-killers/pain-relievers and opioids’, and removed ‘Non-medical use of Other Opioids/Opiates (e.g. Morphine, Pethidine)’
  • Section D (Tobacco)
    • Removed ‘Battery Operated electronic cigarettes (e-cigarettes)’ from D26. There was enough policy interest in electronic cigarettes to warrant separate questions about their use.
    • Several new questions were added on electronic cigarettes, including frequency of use (D27), age first used (D28), reasons for using (D29) and where they were obtained (D30).
    • Added the words ‘in Australia’ to the question that asks about whether people have seen tobacco products which do not have the plain packaging/graphic health warnings (D31). Also added a time period to the question on how many packets of these tobacco products were purchased (D32).
    • Included a new question (D33) on the kind of outlet from which respondents purchased products that did not have plain packaging with the graphic health warnings.
  • Section F (Pain-killers, pain relievers and opioids)
    • The pain-killers/analgesics section and the other opiates section were combined into one section, and the section and questions were reworded to ‘Pain-killers/pain-relievers and opioids’.
    • Paracetamol and aspirin were removed from the list of examples and were specifically excluded from the description of pain-killers, pain-relievers and opioids. The examples in the description were updated and now include only opioid analgesics.
    • F11 was moved to after question F4 (renumbered as F4B) and its response options were updated.
  • Section K (Meth/amphetamine)
    • A new question on all forms of meth/amphetamine used in the last 12 months (K11B) was introduced.
    • For questions K11A/B/C, response code ‘Powder’ was changed to ‘Powder/speed’.
  • Section Q (Ecstasy)
    • Questions on the forms of Ecstasy used (Q10A – Ever used, and Q10B – Main form used) were introduced into the survey.
  • Section TT (Other Psychoactive Substances)
    • Section heading was updated from ‘Emerging Drugs’ to ‘Other Psychoactive Substances’. Question wording was updated to refer to ‘Other Psychoactive Substances’, and examples were also updated.
  • Section Y (Harms)
    • Two new questions about injuries or illnesses sustained while under the influence of alcohol or illicit drugs were introduced (Y19A and Y19B).
  • Section YY (Policy support)
    • Three new policy measures about electronic cigarettes use were added to the tobacco policy support question (YY2).
    • A new policy measure about take-home naloxone was included in the injecting drug policy support question (YY3).

Refer to the Supplementary table footnotes for selected questionnaire change caveats and other data quality issues.

Terminology

Alcohol risk definitions

The alcohol risk data presented in the key findings are reported against guideline 1 and guideline 2 of the Australian guidelines to reduce health risks from drinking alcohol, released in March 2009 by the National Health and Medical Research Council (see Box 1).

Guideline 1 is based on calculating the cumulative lifetime risk associated with multiple drinking occasions (NHMRC 2009). To calculate lifetime risk, the number of standard drinks consumed by a person over the last 12 months was divided by 365. People whose average was more than 2 standard drinks per day were considered to be lifetime ‘risky’ drinkers.
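
A minimal sketch of this classification, using made-up drink totals, is:

    def lifetime_risk_status(standard_drinks_last_12_months):
        """Guideline 1: an average of more than 2 standard drinks per day over
        the last 12 months is classed as 'risky'."""
        average_per_day = standard_drinks_last_12_months / 365
        return "risky" if average_per_day > 2 else "low risk"

    print(lifetime_risk_status(900))   # about 2.5 drinks per day -> 'risky'
    print(lifetime_risk_status(365))   # 1 drink per day -> 'low risk'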

Unbranded and illicit branded tobacco

Illicit tobacco includes both unbranded tobacco and branded tobacco products on which no excise, customs duty or GST was paid. Unbranded tobacco (commonly known as chop-chop) is finely cut, unprocessed loose tobacco that has been grown, distributed and sold without government intervention or taxation (ANAO 2002).

Illicit branded tobacco products include overseas-produced cigarettes (or packets of smoking tobacco) designed to comply with packaging laws in countries other than Australia but which make their way into Australia, without payment of customs duty, for sale to consumers in Australia.

Licit drugs—illicit use

In the 2016 survey, as in the past, respondents were asked about their use of certain drugs that have legitimate medical uses—pain-killers/opioids, tranquillisers/sleeping pills, steroids, methadone/buprenorphine (termed 'pharmaceuticals') and meth/amphetamines. The focus of the survey and corresponding data are on the use of these drugs for non-medical purposes.

The term 'illicit drugs' in this report includes the following: illegal drugs (such as cannabis), pharmaceutical drugs (such as pain-killers, tranquillisers) when used for non-medical purposes (strictly an illicit behaviour), and other substances used inappropriately such as inhalants (see Box 2: Definition of illicit use of drugs for further details). Note that where each of these licit/illicit drugs is central to the analysis, it is their illicit use that is analysed.

Box 2: Definition of illicit use of drugs

'Illicit use of a drug' encompasses a number of broad categories including:

  • Illegal drugs—a drug that is prohibited from manufacture, sale or possession in Australia—for example, cannabis, cocaine, heroin and amphetamine type stimulants
  • Pharmaceuticals—a drug that is available from a pharmacy, over the counter or by prescription, which may be subject to misuse—for example, opioid-based pain relief medications, opioid substitution therapies, benzodiazepines, over-the-counter codeine and steroids.
  • Other psychoactive substances—legal or illegal, potentially used in a harmful way—for example, kava, synthetic cannabis and other synthetic drugs, or inhalants such as petrol, paint or glue (MCDS 2011).

Further information

Confidentialised unit record files (CURF) of the NDSHS are available to researchers through the Australian Data Archive at the Australian National University. The latest available CURF is for the 2013 NDSHS. The 2016 CURF is expected to be available by the end of 2017.

For more information contact the AIHW at aod@aihw.gov.au.


References

  1. ANAO (Australian National Audit Office) 2002. Administration of tobacco excise. Audit report No. 55, 2001–02 Performance Audit. Canberra: ANAO, Commonwealth of Australia. Viewed 2 April 2014.
  2. Chalmers J, Lancaster K & Hughes C 2014. The stigmatisation of ‘ice’ and under-reporting of meth/amphetamine use in general population surveys: a case study from Australia. International Journal of Drug Policy 36:15–24.
  3. Ministerial Council on Drug Strategy (MCDS) 2011. The National Drug Strategy 2010–2015. Canberra: Commonwealth of Australia.
  4. National Health and Medical Research Council (NHMRC) 2009. Australian guidelines to reduce health risks from drinking alcohol. Canberra: NHMRC.