
Immanence Engine
This research presents the Immanence Engine, a generative audiovisual system designed to operationalise identity under conditions of augmented capitalism. Rather than treating identity as a representational construct, the system models it as an emergent, computational process shaped by the interaction of perception, classification, and feedback.
The system begins with unstructured video data, which is processed by a pre-engine referred to as the Aesthetic Parser. This module segments footage temporally and classifies it into three categories—baseline, weird, and eerie—based on low-level visual features such as brightness, variance, and edge density. These classifications are not pre-given, but produced computationally, generating both a structured dataset and an annotation framework. In this way, the system establishes the conditions under which audiovisual material becomes legible.
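The text does not give the Aesthetic Parser's actual thresholds or feature definitions, but its logic can be sketched in miniature. The following is an illustrative sketch, assuming grayscale frames as 2D lists of 0-255 values and hypothetical threshold values (`bright_lo`, `var_hi`, `edge_hi`); the real module would operate on decoded video frames.

```python
def frame_features(frame):
    """Compute low-level features from a grayscale frame (2D list of 0-255 values)."""
    pixels = [p for row in frame for p in row]
    n = len(pixels)
    brightness = sum(pixels) / n
    variance = sum((p - brightness) ** 2 for p in pixels) / n
    # Edge density: fraction of horizontal neighbour pairs with a large jump.
    edges = sum(1 for row in frame for a, b in zip(row, row[1:]) if abs(a - b) > 40)
    pairs = sum(len(row) - 1 for row in frame)
    return brightness, variance, edges / pairs

def classify(frame, bright_lo=60, var_hi=2500, edge_hi=0.3):
    """Map features to one of the three categories. Thresholds are illustrative,
    not taken from the Immanence Engine itself."""
    brightness, variance, edge_density = frame_features(frame)
    if brightness < bright_lo or edge_density > edge_hi:
        return "eerie"
    if variance > var_hi:
        return "weird"
    return "baseline"
```

Running such a classifier over every temporal segment yields both the structured dataset and the annotation framework the text describes: each segment carries a computed label rather than a hand-assigned one.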
The core of the system lies in its real-time computation of identity. This is achieved by combining two forms of input: immediate perceptual features extracted from video frames, and the distribution of classifications generated by the parser. These inputs are fused through a weighted model, producing a dynamic set of identity weights corresponding to the three categories. Identity is therefore not fixed or representational, but continuously recalculated as a function of both present input and accumulated structure.
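The fusion step can be expressed compactly. In this sketch, `frame_scores` stands for the immediate per-category perceptual response and `parser_dist` for the accumulated classification distribution; the blending factor `alpha` is a hypothetical parameter balancing present input against accumulated structure.

```python
CATEGORIES = ("baseline", "weird", "eerie")

def identity_weights(frame_scores, parser_dist, alpha=0.6):
    """Fuse immediate per-frame scores with the parser's accumulated
    distribution into normalised identity weights (illustrative model)."""
    fused = {c: alpha * frame_scores[c] + (1 - alpha) * parser_dist[c]
             for c in CATEGORIES}
    total = sum(fused.values()) or 1.0
    # Normalise so the three weights always sum to one.
    return {c: v / total for c, v in fused.items()}
```

Because the weights are recomputed on every frame, identity in this model is exactly what the text claims: a continuously recalculated function of present input and accumulated structure, not a stored value.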
To stabilise this process, the system introduces a phase model in which identity states are interpreted as behavioural conditions. Depending on which weights dominate, the system transitions between phases such as balanced, baseline dominant, weird dominant, and eerie drift. Temporal smoothing keeps these transitions stable, preventing rapid oscillation between phases. Once established, the current phase feeds back into the system, influencing subsequent identity formation and introducing a degree of behavioural continuity.
The system outputs a continuously evolving audiovisual composition generated through the weighted blending of video streams. Importantly, this output is accompanied by a visualisation layer that exposes internal processes, including identity weights and phase states. This makes the system’s operations legible, allowing identity to be understood as a computational process rather than a fixed representation.
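The weighted blending of video streams reduces, per pixel, to a convex combination governed by the identity weights. A minimal sketch, assuming one grayscale frame per category and the weight dictionary from the fusion step:

```python
def blend_frames(frames, weights):
    """Blend one frame per category into a single output frame,
    pixel-wise weighted by the current identity weights (illustrative)."""
    cats = ("baseline", "weird", "eerie")
    rows, cols = len(frames[cats[0]]), len(frames[cats[0]][0])
    return [[sum(weights[c] * frames[c][r][k] for c in cats)
             for k in range(cols)]
            for r in range(rows)]
```

The accompanying visualisation layer would simply render `weights` and the current phase alongside this composite, which is what makes the internal computation of identity legible to the viewer.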
Conceptually, the Immanence Engine aligns with Guy Debord’s account of mediated social relations, in which images structure social experience, and extends Mark Fisher’s argument that critique must operate within the systems it engages. By embedding classification, evaluation, and feedback within a single computational framework, the system demonstrates how identity is continuously produced and reconfigured under platform conditions.
As both a conceptual model and a practical implementation, the Immanence Engine offers a method for engaging artificial intelligence not as an external object of critique, but as a site in which critique can be enacted through operation.
