Interpreting OSR data

It is important that the following issues be considered when interpreting the Online Services Reporting (OSR) data presented in this report.

General notes

OSR data are collected using a web-based reporting tool designed to capture information about organisations funded by the Australian Government to provide health services to Aboriginal and Torres Strait Islander people. Data for a small number of non-Indigenous clients attending these services are also included.

A characteristic of the collection is that the number of organisations submitting data changes slightly each year. While the same organisations largely contribute to the collection each year, the number may change due to changes in funding, auspicing or reporting arrangements at the local level.

Another characteristic of the collection is that the organisations submitting valid data for a particular data item may change over time. This is because data with unresolved quality issues at the cut-off date for each collection are excluded from national analyses, meaning that in each year some organisations' data are only partially accepted rather than fully accepted. Both the changing set of organisations in scope for the collection and the changing number of organisations with valid data for each data item may affect time series analyses. In this report, time series analyses are based on all organisations that provided valid data in each year, rather than on the subset with valid data across all years.
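The per-year basis described above can be sketched as follows. This is an illustrative example only: the record layout (organisation, year, value, validity flag) and the function name are assumptions for the sketch, not the actual OSR data structures.

```python
# Sketch of the per-year approach: each collection year's total for a data
# item is based on all organisations with valid data in THAT year, rather
# than on the subset of organisations with valid data in every year.
# Field names (org_id, year, value, valid) are illustrative, not the
# actual OSR schema.

def yearly_totals(records):
    """records: list of dicts with keys org_id, year, value, valid."""
    totals = {}
    for r in records:
        if not r["valid"]:   # items with unresolved quality issues at the
            continue         # cut-off date are excluded from national analyses
        totals[r["year"]] = totals.get(r["year"], 0) + r["value"]
    return totals

records = [
    {"org_id": "A", "year": 2017, "value": 100, "valid": True},
    {"org_id": "B", "year": 2017, "value": 50, "valid": False},  # excluded in 2017
    {"org_id": "A", "year": 2018, "value": 120, "valid": True},
    {"org_id": "B", "year": 2018, "value": 60, "valid": True},   # accepted in 2018
]
print(yearly_totals(records))  # {2017: 100, 2018: 180}
```

Note that organisation B contributes to the 2018 total but not the 2017 total, so year-to-year movements reflect both real change and the changing set of organisations with valid data.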

All OSR data in this report exclude data from organisations funded only for maternal and child health services.

Changes to collection in 2018–19

In 2018–19, the OSR collection underwent significant change and was scaled back to include only ‘core’ items. Items dropped include the substance use and social and emotional wellbeing modules, and the services provided and cultural safety items. Plans are underway to reintroduce key items in a staged approach over the next few years.

Also, in collections prior to 2018–19, maternal and child health (MCH) questions were in a separate module from preventative health (PH). In 2018–19 these modules were combined, but organisations receiving MCH funding were required to report only on the services covered by that funding, not on all types of PH services the health organisation offered.

Episodes of care

While the collection and validation processes for most years have been similar, episodes of care data for 2016–17 are not comparable with other years, because changes were made to the types of contacts counted as an episode of care and to how episodes of care were defined and recorded within some clinical information systems. As a result, some contact types (for example, health care delivered over the telephone and hospital-related contacts) were excluded from the episode of care count in some organisations, producing an expected decrease in episode of care counts in 2016–17. There was, however, also an unexpected decrease in episode counts in a few organisations using Medical Director (MD), where some clinical contacts were not counted in their episodes of care data as they should have been. Together, these changes led to lower recorded episode of care numbers and potential undercounts for some services in 2016–17. In 2017–18, these contact types were again included in the episodes of care count and the extraction issues were resolved.

Data quality and exclusions

In 2018–19, by the final cut-off date for submissions, most organisations (98%) had provided data that could be included in national analyses. The remaining 2% (5 organisations) had a total of 10 data items excluded from national analyses due to remaining data quality issues (some organisations had more than 1 item excluded). Exclusion rates vary by data item.

Common data quality queries received during data submission were around incomplete or inaccurate data (for example, workforce positions were not reported, or were reported as a number of people rather than full-time equivalent positions); data discrepancies between two or more questions (for example, the number of clients exceeded the number of episodes of care); and large increases or decreases in data compared with previous submissions. Where significant data quality issues remained after follow-up, these data were excluded from national analyses.
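The kinds of cross-checks described above can be sketched as simple validation rules. This is a hypothetical illustration: the function name, field names and the 50% change threshold are assumptions for the sketch, not the actual OSR validation rules.

```python
# Illustrative validation checks mirroring the common data quality queries:
# missing FTE workforce data, clients exceeding episodes of care, and large
# changes against the previous submission. Names and threshold are assumed.

def quality_queries(submission, previous=None, change_threshold=0.5):
    """Return a list of data quality queries for one organisation's submission."""
    queries = []
    # Incomplete data: workforce not reported in full-time equivalent terms.
    if submission.get("workforce_fte") is None:
        queries.append("workforce positions not reported as FTE")
    # Discrepancy between questions: clients should not exceed episodes of care.
    if submission["clients"] > submission["episodes_of_care"]:
        queries.append("number of clients exceeds episodes of care")
    # Large movement compared with the previous submission.
    if previous is not None and previous["episodes_of_care"]:
        change = (abs(submission["episodes_of_care"] - previous["episodes_of_care"])
                  / previous["episodes_of_care"])
        if change > change_threshold:
            queries.append("large change in episodes of care vs previous submission")
    return queries

current = {"workforce_fte": None, "clients": 500, "episodes_of_care": 400}
last_year = {"episodes_of_care": 1000}
for q in quality_queries(current, last_year):
    print(q)  # all three queries fire for this submission
```

In the collection itself, such queries are followed up with the organisation; only where significant issues remain unresolved are the affected data items excluded from national analyses.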