Friday, 19 September 2025

Mental Health Metadata: Predictive Policing & Digital Phenotyping in Australia [Part 1]

Mental Health Metadata, Predictive Policing and Digital Phenotyping: Part 1 (Learning from Palantir)


Mental health care in Australia, and particularly in New South Wales (NSW), has long been described as fragmented, underfunded, and difficult to navigate (Productivity Commission, 2020). 


The system is crisis-oriented, prioritising acute interventions for individuals experiencing severe distress, suicidality, homelessness, or post–domestic violence trauma, while neglecting early intervention and long-term recovery needs (Rosenberg & Hickie, 2019). 


This fragmented approach exacerbates imbalance and exploits inequality, especially among those with chronic or complex conditions who require sustained psychosocial support (Meurk et al., 2016).


The medical model, long grounded in the DSM-5 diagnostic framework, has been criticised as reductive and inadequate for addressing the social determinants of mental health (Horwitz, 2021). 


Many scholars and service users have called for a shift toward holistic and recovery-oriented approaches that better integrate psychological, social, and cultural factors into care (Slade et al., 2014). 


Yet, instead of strengthening mental health funding and services, significant investment has been diverted toward data and digital infrastructures, including health metadata collection, electronic record systems, and digital surveillance technologies (Wykes & Schueller, 2019).


I highlight one specific area of concern recently elucidated in my research: the use of body-worn cameras (BWCs) by police, and their role in responding to mental health crises. 


I also want to illuminate how policing surveillance systems make use of mental health metadata in NSW and, likely, Australia as a whole. 


In NSW, BWC footage is integrated into the COPS (Computerised Operational Policing System) database, where it is tagged with metadata such as “mental health,” “self-harm,” “involuntary patient under the Mental Health Act,” “suicide attempt,” or “domestic violence” (NSW Police Force, 2022). 


While originally intended for accountability and service evaluation, this metadata now also functions as a novel but foundational form of “mental health surveillance.” 


Once indexed, these entries provide elementary data that constructs a digital trace of individuals in crisis, regardless of whether they have committed a crime. 


This raises fundamental ethical questions about stigma, privacy, and the permanence of mental health records (Carpenter & Hayes, 2019).


From a psychological perspective, the risks of such systems lie in algorithmic bias and predictive policing.


Metadata generated through BWCs and logged in policing systems may be used to train machine learning algorithms for risk flagging and predictive modelling. 


However, these algorithms will inherit the biases of the original data inputs and encode the assumptions of the individuals who entered the data.



For example, distressed behaviours such as dissociation, autistic meltdowns, or trauma flashbacks may be misinterpreted as aggression or risk of violence by poorly trained frontline officers (McKernan, 2021). 


Metadata entry carries limited clinical nuance. 


Without contextual understanding, automated metadata generation risks mislabelling such events, thereby amplifying stigma.


The problem is compounded by the limited mental health training received by frontline police. 


In NSW, most officers complete no more than a short, optional Mental Health First Aid course, typically around eight hours in duration (Herrington & Pope, 2014). This increases the likelihood of individual cognitive errors, mislabelling, and discriminatory practices.


Officers carry inherent stigma and bias and, often unconsciously, under work-related pressure and stress, direct it at mentally ill patients who present to the emergency department (whether voluntarily or involuntarily) under the Mental Health Act 2007 (NSW). 


Qualified clinicians, by contrast, have spent years studying and training to work in the field and to understand the brain, behaviour, and emotions of individuals in crisis states (van der Kolk, 2015; Kerns et al., 2015). 

Added to this, dissociative episodes, autistic meltdowns, or trauma flashbacks symptomatically present as surface-level agitation or aggression, yet are manifestations of distress rather than criminal intent. 

Consequently, subjective interpretations and cognitive biases can be encoded into metadata, such as labelling a trauma-related emotional flashback as “suicidal” or “violent” (McKernan, 2021).


Sociologically, this creates a feedback loop: flagged metadata reinforces stereotypes of individuals in crisis as “high risk,” leading to more frequent and forceful police interventions. 


Social identity theory (Tajfel & Turner, 1979) provides insight into why marginalised populations are disproportionately affected. 


Police officers, operating under institutional norms and stress, categorise individuals into “ingroup” (law-abiding citizens, compliant patients) and “outgroup” (non-compliant, mentally ill, non-conformist, difficult). 


These classifications trigger stereotypes that guide perception and data input, creating stigma towards minorities: Aboriginal and Torres Strait Islander peoples, survivors of trauma, people with addiction, and those with complex mental health histories are more likely to be placed in the outgroup. 


Once categorised, they are subjected to disproportionate surveillance, restraint, and detention. 


The metadata system amplifies this bias by encoding outgroup status into digital records, ensuring that stigma is perpetuated across institutions. 


This reflects the broader sociological process of systemic discrimination, where technological infrastructures reproduce existing power imbalances under the guise of objectivity. 


These dynamics are particularly dangerous for marginalised populations, including Aboriginal and Torres Strait Islander peoples, survivors of sexual violence, and those with complex trauma histories, who are already disproportionately subject to coercive interventions such as chemical restraint and involuntary detention (Dudgeon et al., 2016; Gooding, 2017). 


The retraumatising potential of such practices, such as being physically restrained or injected during a flashback, cannot be overstated in clinical terms (van der Kolk, 2015).


At the same time, researchers have pointed to the possible use of BWC footage for service audits, training, and bias detection (Lum et al., 2019). 


Yet without rigorous ethical frameworks and oversight, these same technologies risk entrenching stigma, criminalising mental illness, and expanding surveillance into health and justice systems (Tufekci, 2015).


The integration of artificial intelligence (AI) into predictive policing carries significant risks when applied to mental health contexts. 


Automated tagging systems can scan body-worn camera (BWC) footage for features such as “weapons visible,” “raised voice,” or “self-harm gestures.” 


However, such identifications often lack clinical nuance. 


Many frontline staff, such as nurses, paramedics, police, and hospital staff, have little to no training in mental health, trauma-informed care, or recovery models. 


It is necessary to note what training to work with patients in the field of human behaviour actually requires: years of clinical training in developmental psychological models, human behaviour, psychopathology, and neurocognitive frameworks, followed by rigorous practical work. Only licensed clinicians certified to work in mental health and trauma, who hold titles such as psychiatrist, psychologist, clinical psychologist, psychotherapist, counsellor, social worker, or mental health case worker, are nuanced enough to understand these presentations. 


For police, paramedics, and nurses, this minimal preparation leaves them ill-equipped to differentiate between psychiatric crisis, neurodivergence, and trauma reactions. 


In predictive policing frameworks, such mislabelling may feed into risk profiling systems. 


Individuals appearing repeatedly in flagged BWC clips may be placed on “high-risk” lists, reinforcing the association of mental illness with criminality (Lum & Isaac, 2016). 


These risk markers are derived from biased or incomplete metadata.


The consequence is an algorithmic feedback loop: more flagged footage generates higher risk scores, prompting heavier policing responses at future call-outs (Brantingham, 2017).
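The feedback loop described above can be sketched as a toy simulation. To be clear, every number, threshold, and function name below is a hypothetical illustration of the dynamic, not a value or mechanism from any real policing system:

```python
# Toy simulation of the algorithmic feedback loop described above.
# All numbers and thresholds are hypothetical illustrations, not real
# values from any policing system.

def risk_score(flagged_clips: int) -> float:
    """Naive score that grows with the number of flagged BWC clips."""
    return min(1.0, 0.25 * flagged_clips)

def simulate_callouts(n_callouts: int) -> list[float]:
    """Each forceful response tends to produce another flagged clip,
    which raises the score at the next call-out: a self-reinforcing loop."""
    flagged = 1  # a single initial flag, e.g. a mislabelled crisis event
    scores = []
    for _ in range(n_callouts):
        score = risk_score(flagged)
        scores.append(score)
        # A higher score prompts a heavier response, which is in turn
        # more likely to be logged as a new "aggression" flag.
        if score > 0.2:
            flagged += 1
    return scores

print(simulate_callouts(6))  # scores only ever climb
```

Note that the score never decreases: nothing in the loop can remove a flag, which is precisely the "digital permanence" problem discussed throughout this piece.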


This dynamic is particularly concerning in light of the trauma histories prevalent among mental health service users. 


Many individuals in crisis have experienced adverse childhood experiences (ACEs) or complex trauma, meaning that coercive responses such as physical restraint, handcuffing, or forced sedation risk retraumatisation (Dudgeon et al., 2016; Bloom & Farragher, 2013). 


When officers are briefed that “prior footage shows aggression when restrained,” the likelihood of escalation increases, as police anticipate hostility rather than approaching with de-escalation strategies. 


In psychological terms, this represents a self-fulfilling prophecy: algorithmic predictions shape officer behaviour, which in turn provokes the very outcomes used to justify the predictions.


This illustrates Merton’s (1948) concept of the self-fulfilling prophecy, which explains how expectations influence behaviour in ways that bring about their own confirmation. 


In the policing of mental health crises, once an individual is flagged as “aggressive” or “high risk” in metadata, officers approach with heightened vigilance and coercive strategies.


From a systems perspective, this constitutes a hostile feedback loop.


Metadata-driven profiling escalates forceful interventions, which then generate further negative metadata, deepening the cycle of bias and stigma (Tufekci, 2015). 


Instead of reducing risk, predictive policing in this form risks discriminatory practices against people with mental illness, especially those from already marginalised groups. 


This dynamic demonstrates how predictive policing embeds retraumatisation into the system.


Globally, similar debates are unfolding in the emerging field of digital phenotyping, which uses real-time digital data from smartphones, wearables, and sensors to monitor mental health states. 


Systematic reviews have demonstrated its feasibility in detecting early symptoms of depression, anxiety, and psychosis (Mohr et al., 2017; Rohani et al., 2018; Dogan et al., 2022). 


However, methodological inconsistencies, privacy concerns, and the risk of over-surveillance remain unresolved (Insel, 2017; Cornet & Holden, 2018). 


In Australia, the Black Dog Institute has begun to explore these methods, but the ethical implications for consent, data storage, and long-term social consequences remain concerning (Torous & Roberts, 2017).


Taken together, NSW mental health policy illustrates an incongruity.


While real-time mental health services remain under-resourced and inaccessible for many, increasing resources are allocated to data-driven surveillance and metadata systems. 


From a psychological and social science standpoint, this shift risks reinforcing stigma, exacerbating inequity, and embedding systemic bias under the guise of innovation. 


What is urgently required is a rebalancing toward community-based, trauma-informed, and recovery-oriented approaches that prioritise human connection over data extraction.



Digital Phenotyping


Globally, the field of digital phenotyping is emerging: the use of smartphone, wearable, and sensor data to monitor mental health states (or health states more broadly), a phenomenon that has gained significant momentum. 

In mental health, my informal literature analysis shows that a recent systematic review, synthesising 5,422 articles, identified 74 key studies illustrating how passive and active digital data can reveal behavioural and psychological patterns relevant to mental health symptoms (Mohr et al., 2017; Cornet & Holden, 2018). 


In nonclinical populations such as students and employees, digital phenotyping has been found feasible for early detection of stress, anxiety, and mild depression, although challenges remain around self-reporting bias and methodological variability (Insel, 2017; Dogan et al., 2022).


A review published in PLOS Digital Health mapped 29 studies demonstrating the potential of digital phenotyping to longitudinally track mental health status across populations (Huckvale et al., 2019). 


Within affective and psychotic spectrum illnesses such as schizophrenia and bipolar disorder, 51 studies, many incorporating machine learning, have applied passive phenotyping to identify relapse risk and symptom changes (Rohani et al., 2018). 


In Major Depressive Disorder (MDD), a systematic review of 24 studies with almost 10,000 participants indicated moderate accuracy in predicting mood states from smartphone data (Saeb et al., 2016). 


Across bipolar disorder and MDD samples, behavioural markers such as GPS mobility, sleep rhythms, smartphone usage, and activity patterns correlated strongly with mood fluctuations and treatment response (Faurholt-Jepsen et al., 2019).
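To make these behavioural markers concrete, here is a minimal sketch of how raw sensor streams are typically collapsed into daily features. The data values and feature names are hypothetical illustrations of the kinds of markers the literature describes (mobility, sleep, phone usage); real research pipelines are far more involved:

```python
# Minimal sketch of passive digital-phenotyping feature extraction.
# The data and feature names are hypothetical illustrations, not the
# schema of any real study or platform.
from statistics import pstdev

day = {
    "gps_km_per_trip": [0.0, 1.2, 0.4, 3.1],  # distance of each outing
    "sleep_hours": 5.5,
    "screen_unlocks": 140,
}

def daily_features(day: dict) -> dict:
    """Collapse raw sensor streams into simple daily behavioural features,
    the sort of inputs a mood-prediction model might consume."""
    trips = day["gps_km_per_trip"]
    return {
        "mobility_total_km": sum(trips),
        "mobility_variability": pstdev(trips) if len(trips) > 1 else 0.0,
        "sleep_hours": day["sleep_hours"],
        "screen_unlocks": day["screen_unlocks"],
    }

print(daily_features(day))
```

The point of the sketch is how much reduction occurs: a day of continuous sensing becomes a handful of numbers, and everything contextual (why someone stayed home, why they slept poorly) is discarded before any model ever sees the data.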


Technical reviews have highlighted major standardisation challenges in the field, noting inconsistencies in data sources, sensor features, and analytical methods across the 112 reviewed studies (Cornet & Holden, 2018). 


To address these gaps, initiatives such as Singapore’s scoping review protocol are mapping a sweeping expanse of digital data sources, including participants’ online activity and social media profiles, for mental health prognosis (Wang et al., 2022). 


At the platform level, JTrack has emerged as an open-source smartphone application designed to securely monitor behavioural biomarkers while complying with stringent privacy frameworks such as GDPR (Reichert et al., 2021). 


In Australia, the Black Dog Institute is exploring digital phenotyping as a tool to detect early warning signs and understand behavioural pathways in mental illness (Nicholas et al., 2022). 


In the UK, a large-scale study involving over 10,000 participants integrated wearable data (Fitbit) with self-reported measures of depression and anxiety, using machine learning to construct predictive profiles with moderate success rates (Faurholt-Jepsen et al., 2019).


While these outcomes suggest new prospects for early intervention and personalised care, they also raise immense ethical concerns around consent, data governance, privacy, and surveillance. 



On a side note:

I remain objective as I write this through an academic and pragmatic lens.

 But you know what my intuitive and emotional reasoning is screaming at me? 

That's why I enjoy this type of research: my intellectual and cognitive reasoning emphasises the facts, pulling through to find *real truth* to *real life* problems, not getting caught up in speculation, emotive jargon, and intangible, incoherent nonsense. 


So yes, I do, and so should you, feel that these concerns are a pressing cause for alarm when digital phenotyping concepts are mapped onto existing metadata practices in NSW policing and mental health, and even more so when we examine the persistent mental health issues and lack of resources and funding in Australia. 




Moving further into the predictive policing side of things: I was close with someone who worked for the Department of Communities and Justice (DCJ) for years.


I have no issue articulating the “insider scope” he desperately pleaded with me for years not to share. 


I don't name the source but I have no shame in revealing my intel.  


Let's dive deeper into what goes on within the systems in that area of data collation. 


In Australia, policing data infrastructures such as COPS (NSW), QPRIME (Queensland), and LEAP (Victoria) capture “mental health metadata” through event codes (e.g., “Mental Health Act detention,” “welfare check,” “self-harm threat”), crisis response logs (time, location, outcome), and PACER notes (triage outcomes, medication disclosures, admission decisions). 


Body-worn camera (BWC) footage is indexed against these records, allowing cross-system linkage between policing, health, and judicial databases (NSW Police Force, 2022). 


Additional metadata fields, such as “weapons history,” “violent when intoxicated,” or “repeat consumer,” further construct a digital trajectory of an individual’s mental health, regardless of formal clinical diagnosis. 


Court-use summaries, such as “X has had seven PACER attendances in six months,” reinforce these digital shadows, embedding a narrative of risk and creating a potential targeting situation even in the absence of criminality, despite the subject not being a criminal offender (Carpenter & Hayes, 2019).
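The accumulation of event codes into a "digital shadow" can be sketched in a few lines. The field names and codes here are illustrative only, loosely modelled on the event codes mentioned above; they are not the actual COPS or PACER schema:

```python
# Hypothetical sketch of how event-code metadata accumulates into a
# "digital shadow". Field names and codes are illustrative only, not
# the actual COPS or PACER schema.
from collections import Counter

events = [
    {"system": "COPS",  "code": "welfare check"},
    {"system": "PACER", "code": "Mental Health Act detention"},
    {"system": "COPS",  "code": "self-harm threat"},
    {"system": "PACER", "code": "Mental Health Act detention"},
]

def digital_shadow(events: list[dict]) -> dict:
    """Summarise linked records the way a court-use summary might,
    even though no offence appears anywhere in the data."""
    codes = Counter(e["code"] for e in events)
    return {
        "total_attendances": len(events),
        "code_counts": dict(codes),
        "flagged_high_contact": len(events) >= 3,  # arbitrary threshold
    }

print(digital_shadow(events))
```

Notice that the summary contains no offence, charge, or conviction; the "high contact" flag is generated purely from the count of crisis attendances, which is exactly the drift from health record to risk profile described here.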


When juxtaposed with global digital phenotyping research, the NSW system illustrates how metadata can drift from health monitoring into predictive policing and digital algorithmic profiling.


Just as passive digital markers (e.g., GPS mobility, smartphone usage) can predict mood shifts, police metadata (e.g., frequency of crisis call-outs, prior “mental health history”) can function as behavioural flags. 


However, unlike clinical research contexts governed by traditional ethical review boards, NSW’s metadata ecosystem is a novel and largely unexamined construct, lacking the oversight that conventional research on human behaviour has been regulated to abide by since World War Two. 


As a result, this digital data risks encoding stigma and retraumatisation and creating systemic bias. 


For example, a survivor of sexual trauma experiencing dissociation may be misinterpreted as “violent” or “suicidal” and subsequently subjected to coercive intervention, with such incidents recorded and perpetuated in metadata profiles (Gooding, 2017; van der Kolk, 2015).


This juxtaposition accentuates the double-edged nature of digital phenotyping. 


On one hand, researchers stipulate that it represents an advantageous avenue for progressing personalised and preventive mental health care. 


On the other, when applied through unregulated, coercive, and punitive systems such as policing, or through unethical surveillance, it risks entrenching discrimination, stigma, and inequality, and transforming human distress into enduring digital risk identities. 


Predictive policing imports the logic of surveillance into health contexts, amplifying stigma, retraumatisation, and inequality. An individual's distress is misread at the surface level, with limited awareness of the neurobiological events that occur in a trauma response (HPA-axis dysregulation); sociologically, bias is encoded into systems; and psychologically, power imbalance, discrimination, and coercion undermine recovery.


While community-based, trauma-informed services remain underfunded, state and federal governments continue to direct millions, sometimes billions, of dollars into metadata infrastructures and artificial-intelligence-driven predictive technologies. 

This allocation of resources sidelines human-centred frontline care in favour of systems that prioritise surveillance and risk management over recovery and wellbeing. 

The consequence is a dangerous reduction of complex human suffering into algorithmic signals, a reduction of person-centred care into binary classifications of 0s and 1s.

These remove the nuance and complexity that pertain to humanity and perpetuate discriminatory feedback loops.

By privileging technological oversight over immediate human engagement, surveillance, predictive policing, and, as a rule, governments directing social policy entrench systemic injustices and risk re-traumatising those already marginalised. 


Instead, investment should be redirected toward evidence-based models of recovery that emphasise human interaction, therapeutic alliance, the recovery model, and community support: approaches consistently shown to foster healing and resilience in ways no algorithm can replicate.

These should be research-based, not experimental technocracy built on the dawn of this new artificially intelligent age.


What is urgently required is a rebalancing: away from coercive data extraction and toward trauma-informed, recovery oriented, and community-based care models. 


This means prioritising clinicians trained in neurobiological and psychological frameworks, embedding social awareness of stigma and power, and ensuring that digital tools are subject to rigorous ethical oversight. If we are going to bring the digital age into mental health care, we should first make sure that the foundational systems, the physical and practical groundwork and support models for community and public resources, are prioritised; that hasn't happened yet.

Before the monitoring of phones, biometrics, and invasive digital phenotyping, resources must first deliver real-time mental health infrastructure and services: places people can attend to receive support, and funding for services delivered in real time and in real places. Otherwise, I hypothesise, digital phenotyping will lead to further backlogging in the mental health system and create more demand on an already failing infrastructure; we need to ensure there is a stable foundation. 


Without such safeguards, predictive policing and digital phenotyping risk transforming human distress into permanent digital risk identities, perpetuating trauma rather than healing it.


To build a space that is safe and healing, a foundation must first be laid, and that requires a blueprint. Australia's health system is already in crisis. To overfund digital surveillance of mental health metadata while ignoring public mental health infrastructure isn't just ignorant, it's downright stupid; it reeks of limited intelligence and demands inquiry into who is directing the funding and the research, why, and what is happening at all levels. 


But I suppose the implications of this trajectory become clearer when examined internationally; for instance, in Palantir’s predictive policing experiment in New Orleans, where algorithmic profiling was embedded into local governance with profound consequences for civil liberties.


Palantir’s predictive policing experiment in New Orleans illustrates the dangers of embedding algorithmic profiling into local governance under conditions of secrecy and limited oversight. 


Palantir is a data-analytics company that partnered with the New Orleans Police Department. It deployed data-mining software that integrated arrest records, social networks, and other public data to generate risk profiles and forecast “likely offenders.” 


While framed as innovation, the program operated without public knowledge or informed consent, raising serious civil liberties concerns. 


Communities already subject to over-policing; particularly African American residents, were disproportionately targeted, reinforcing existing racial and socioeconomic inequalities. 


Rather than enhancing safety, the system amplified surveillance, entrenched stigma, and blurred the boundary between law enforcement and social governance, demonstrating how predictive policing can replicate structural bias under the guise of neutrality.


So this Palantir scenario is exactly what I fear is going to happen to individuals with mental illness in Australia. 


I suppose if we look at the mental health metadata, digital phenotyping, and the use of body camera footage to target mental health call-outs in NSW, we are already living in a dystopian version of Palantir. But instead of targeting an ethnic minority, it targets emotionally and mentally traumatised people, sometimes on the brink of suicide, or in severe psychosis or dissociative states, when police intervene. 


Let’s dive into the digital shadow that digital phenotyping via mental health metadata creates, and take a further deep dive into Palantir. 


With the digital ID now required for social media, it is all the more important to remain critical and aware of this. 

