RESEARCH
OUTCOME MEASURES
We use child-worn cameras, microphones, movement sensors and physiological sensors to track children’s behaviours within their environments in real time. As well as using this technology in our own research, we also offer these tools to industry partners and organisations for use in consumer and intervention research.
Traditional approaches to consumer and intervention research rely on pre-post measurements to test how participants are affected before vs after the intervention. Using child-worn wearables and neuroimaging during an intervention to capture immediate, moment-by-moment effects on children’s engagement, learning, social interaction, group dynamics and emotion regulation is a much more sensitive approach for detecting fine-grained effects.
Learn more about how we measure:
+ Mood & regulation
+ Language
+ Attention & engagement
+ Physical development
+ Social development

MOOD & REGULATION
Emotion regulation: using machine learning classifiers applied to microphone and/or heart rate data we can automatically identify all of the naturally occurring moments of child distress during the day. Then, we can examine how quickly children recover following each episode. In this way we can measure how a child’s emotion regulation capacity is affected by specific features of their environment.
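As an illustrative sketch (not our production pipeline), recovery time can be quantified as the seconds it takes heart rate to return to a pre-episode baseline. The heart-rate series and episode boundaries here are hypothetical inputs that would come from the upstream classifiers:

```python
import numpy as np

def recovery_times(hr, episodes, baseline_win=60, tol=0.05, max_wait=600):
    """Seconds until heart rate falls back to within `tol` of the
    pre-episode baseline; np.nan if no recovery within `max_wait` samples.
    `hr` is a 1 Hz series; `episodes` holds (start, end) sample indices
    and is assumed not to start at the very beginning of the recording."""
    times = []
    for start, end in episodes:
        baseline = hr[max(0, start - baseline_win):start].mean()
        post = hr[end:end + max_wait]
        hit = np.flatnonzero(post <= baseline * (1 + tol))
        times.append(float(hit[0]) if hit.size else np.nan)
    return np.asarray(times)
```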
Positive mood: using machine learning analyses applied to video or microphone data we can measure the emotional valence of child speech (i.e. how happy or sad they sound).
Quality of emotional expression: by applying automatic transcription and semantic topic modelling to the microphone data we can track the frequency of emotional content in children’s naturally occurring speech.
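A minimal sketch of the frequency measure, assuming utterance-level transcripts are already available (the tiny lexicon below is a stand-in for a validated emotion lexicon or data-driven topic clusters):

```python
import re

# Toy stand-in lexicon; a validated emotion lexicon or data-driven
# topic clusters would be used in practice.
EMOTION_WORDS = {"happy", "sad", "scared", "angry", "love", "cry", "laugh"}

def emotion_rate(utterances):
    """Share of word tokens in the transcripts that are emotion words."""
    tokens = [w for u in utterances for w in re.findall(r"[a-z']+", u.lower())]
    return sum(t in EMOTION_WORDS for t in tokens) / len(tokens) if tokens else 0.0
```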

LANGUAGE
Language production: using machine learning analyses and speech transcription on the child-worn microphone data we can track how the frequency and complexity of children’s spontaneous vocalisations varies between different settings.
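For illustration, two standard production measures, vocalisation rate and mean length of utterance, can be computed from time-stamped transcripts; the input format here is an assumption:

```python
def production_measures(transcript, session_minutes):
    """`transcript` is a list of (onset_seconds, text) pairs for one child
    in one setting; speaker diarisation is assumed to happen upstream."""
    word_counts = [len(text.split()) for _, text in transcript]
    n = len(transcript)
    return {
        "vocalisations_per_minute": n / session_minutes,
        # mean length of utterance (MLU) in words, a standard complexity proxy
        "mlu_words": sum(word_counts) / n if n else 0.0,
    }
```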
Language comprehension: using physiological data we can measure how attentive children are to vocalisations directed towards them, and using microphone data we can measure how many of those vocalisations they respond to with their own.
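A sketch of the behavioural half of this measure, assuming upstream diarisation has already labelled which utterances are adult, child-directed speech:

```python
import bisect

def response_rate(adult_onsets, child_onsets, window=3.0):
    """Share of child-directed adult utterances answered by a child
    vocalisation within `window` seconds; onset lists are sorted seconds."""
    answered = 0
    for t in adult_onsets:
        i = bisect.bisect_right(child_onsets, t)  # first child utterance after t
        if i < len(child_onsets) and child_onsets[i] - t <= window:
            answered += 1
    return answered / len(adult_onsets) if adult_onsets else 0.0
```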
Language comprehension - brain measures: with brain measures we can also track how responsive children’s brains are to the language they hear.
Semantic topic analysis: using automatic transcription and semantic topic modelling we can track the occurrences of particular semantically related clusters of words in children’s naturally occurring speech.
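One illustration of how such clusters can be recovered with off-the-shelf topic modelling (scikit-learn’s LDA here; the actual models used may differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_clusters(docs, n_topics=10, top_n=8):
    """`docs` is a list of transcript strings; returns the top words
    per topic, i.e. one semantically related word cluster per topic."""
    vec = CountVectorizer(stop_words="english", min_df=2)
    counts = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in comp.argsort()[-top_n:][::-1]]
            for comp in lda.components_]
```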
Noise: from the microphone data we can also automatically track how noise levels and the clarity of the speech that children hear and produce vary between settings.
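As a sketch, windowed sound levels can be computed directly from the audio samples; note these are levels relative to full scale (dBFS) rather than calibrated SPL, so they support between-setting comparison rather than absolute noise dose:

```python
import numpy as np

def window_levels_dbfs(samples, sr, win_s=1.0):
    """RMS level per non-overlapping window of `win_s` seconds.
    `samples` is a mono numpy array, `sr` the sample rate in Hz."""
    n = int(sr * win_s)
    usable = len(samples) - len(samples) % n
    frames = samples[:usable].reshape(-1, n).astype(float)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return 20 * np.log10(np.maximum(rms, 1e-12))  # avoid log(0) in silence
```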

ATTENTION & ENGAGEMENT
Visual objects and scenes: using machine learning applied to the head camera data we can track how, when and how often particular objects appear in the child’s visual field.
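An illustrative sketch, with `detect_labels` standing in for whatever object detector is used (e.g. a YOLO-family model):

```python
def object_exposure(frames, target, fps, detect_labels):
    """How often `target` enters the child's view and for how long.
    `detect_labels(frame)` is a placeholder for any object detector
    returning the set of labels visible in a frame."""
    appearances, visible, prev = 0, 0, False
    for frame in frames:
        present = target in detect_labels(frame)
        if present and not prev:
            appearances += 1  # a new appearance episode begins
        visible += present
        prev = present
    return {"appearances": appearances, "seconds_visible": visible / fps}
```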
Attention: we can calculate the durations of attention episodes - i.e. how many seconds children remain engaged with an object or task once they have started it.
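For example, given a per-second engaged/not-engaged label stream from upstream gaze coding, episode durations fall out of a simple run-length computation:

```python
import numpy as np

def episode_durations(engaged):
    """Lengths (in samples, e.g. seconds at 1 Hz) of every run of
    consecutive True values in a per-sample engagement label stream."""
    x = np.asarray(engaged, dtype=int)
    edges = np.flatnonzero(np.diff(np.r_[0, x, 0]))  # alternating starts/ends
    return edges[1::2] - edges[::2]
```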
Types of interactions/actions: what types of interaction does the child have with objects in their environment - is it physical play (and if so what type), imaginative play, social play or solo play?
Attention - brain measures: using brain measurements we can track the quality of children’s attention (i.e. are they ‘just looking’ at someone while day-dreaming, or really engaging) via neural markers of attentional engagement.

PHYSICAL DEVELOPMENT
Movement/physical activity: using machine learning applied to movement sensors we can track how long children spend sitting, crawling, walking, and how many steps they take.
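A simplified sketch of the step-count part, using generic peak detection on the accelerometer magnitude (posture classes such as sitting, crawling and walking would come from a trained classifier instead):

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(acc_xyz, sr, min_step_interval=0.3, prominence=0.5):
    """`acc_xyz` is an (n_samples, 3) array in g, `sr` the sample rate;
    returns an approximate step count."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    magnitude -= magnitude.mean()  # remove the gravity/DC component
    peaks, _ = find_peaks(magnitude,
                          distance=int(sr * min_step_interval),
                          prominence=prominence)
    return len(peaks)
```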
Sleep: using sleep sensors we can measure how the duration and quality of children’s sleep varies over time.
Stress: using commercially available equipment we can track how the long-term levels of stress biomarkers such as salivary cortisol vary over time.
SOCIAL DEVELOPMENT
Social responsiveness: we can measure children’s physiological responsiveness (i.e., how attentive they are to vocalisations directed towards them) and, using machine learning analyses applied to the microphone data, their behavioural responsiveness (i.e., how many of those vocalisations they respond to by vocalising themselves).
Affect coordination: by tracking mood from children’s facial expressions and vocalisations we can measure how closely associated each child’s mood is with the average mood of the group.
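As a minimal sketch, assuming per-child valence time series from the upstream mood models:

```python
import numpy as np

def affect_coordination(valence, child):
    """Correlation between one child's valence series and the mean
    valence of the rest of the group; `valence` is (n_children, n_times)."""
    others = np.delete(valence, child, axis=0).mean(axis=0)
    return np.corrcoef(valence[child], others)[0, 1]
```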
Semantic coordination: using semantic topic modelling we can track how semantically related a child’s vocalisations are to the vocalisations from their play partners or adults.
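An illustrative sketch using sentence embeddings (sentence-transformers is one common encoder choice, not necessarily the one used here):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_coordination(child_utts, partner_utts):
    """Mean cosine similarity between paired child/partner utterances."""
    a = model.encode(child_utts)   # (n, d) embedding arrays
    b = model.encode(partner_utts)
    sims = (a * b).sum(axis=1) / (np.linalg.norm(a, axis=1) *
                                  np.linalg.norm(b, axis=1))
    return float(sims.mean())
```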
Intra-group dynamics/synchrony: using brain measurements, we can also track intra-group dynamics/synchrony by measuring how well coordinated the brain and attention states of the group are.
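A sketch of one common synchrony metric, the average windowed pairwise correlation across participants’ time series (window and step sizes here are arbitrary placeholders):

```python
import numpy as np
from itertools import combinations

def group_synchrony(signals, win=100, step=50):
    """`signals` is an (n_participants, n_timepoints) array of neural or
    attention time series; returns one synchrony value per window."""
    scores = []
    for t in range(0, signals.shape[1] - win + 1, step):
        seg = signals[:, t:t + win]
        r = np.corrcoef(seg)  # participant-by-participant correlation matrix
        pairs = [r[i, j] for i, j in combinations(range(len(seg)), 2)]
        scores.append(np.mean(pairs))
    return np.asarray(scores)
```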