“this place was never empty...”


grad school project
creative tech
term 4
spring 2025


professor

Maxim Safioulline
tools

TouchDesigner

deliverables

interactive installation
poetry zine
overview
 
Set in Pasadena’s Arroyo Seco Park, this is a sensory reflection on bodies, memory, and the traces we leave—both human and nonhuman. The project consists of a gestural interaction prototype built in TouchDesigner using 3D scans of the park, and a zine of poems exploring nonlinear time.

Through the digital interface, users “speak” to a reconstructed landscape not with words, but with gestures—raising the question: can technology truly understand our embodied intentions? What might it mean for emotion to be expressed in nonverbal ways, and how might a virtual space respond?

In the end, this project became a way to think about how people and places remember each other—and how even the emptiest-looking spaces are full of traces, if you learn how to look (or move) differently.







proposal

This project explores how gesture becomes a language for navigating absence, presence, and affect in untouchable space through interaction with point cloud models.



context

I'm interested in natural areas nestled in the middle of cities, and Arroyo Seco Park is a well-known one near the school: a historically significant natural area where people come to walk and relax.

Through online research I found some interesting angles: people never stop planning, imagining, and using this place. Some of those plans never happened, and some survive only as local stories. The area is also crucial to wildlife whose lives unfold outside people's view. Yet all of it gathers around the name and place Arroyo, giving me a sense of temporal and spatial overlap. None of this can be described by a linear trajectory or physically touched.

To reflect this layered, nonlinear relationship, I first created a zine that translates these fragments into poetry. I printed the pages on transparent and tracing paper to visualize the sense of overlap. 

At the same time, most interactive systems treat the body as a tool for optimizing control, while in lived experience our gestures are often messy, expressive, and full of emotion. So I decided to recreate, in digital space, a few corners of the park where traces of nature and humanity seem to mingle, and to use gesture to explore our relationship with that space. Through gestures we try to intervene in the digital world, but a screen forever separates us: we reach for something present yet untouchable, known yet never grasped.

I am trying to build a system where the zine and the gesture-based interaction form a conceptual whole: one medium captures the layering of time and narrative, while the other explores the body's intuitive, emotional relationship to space. Together, they invite the audience to enter this landscape and activate the presences layered within it.


zine design overview











interactive design process


01 3D scanning on site
  • I visited the park and scanned several spots, looking for places where I could see traces of both humans and nature.



02 import models and connect with MediaPipe


  • First, I downloaded the MediaPipe package along with the hand tracking .tox files and imported them into TouchDesigner.

  • I established a basic gesture recognition network and confirmed successful data input from the system.

  • I created a base component named interface to visualize my hand gesture.

  • After that, I imported my .ply model into TouchDesigner and set up the initial connections for rendering.

  • This marked the first step of gesture-based interaction—controlling the model using hand gestures.

  • My initial control logic is as follows (a rough code sketch follows this list):
    • The distance between the thumb and index finger of the first hand controls the scaling of the model.
    • The distance between the thumb and index finger of the second hand controls the rotation of the model.
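To make the mapping concrete, here is a rough standalone Python sketch of the same logic using the MediaPipe Hands API directly, outside TouchDesigner; the scale and rotation ranges are placeholder values, not the ones in my network:

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def pinch_distance(hand_landmarks):
    # Landmark 4 is the thumb tip, landmark 8 is the index finger tip.
    thumb, index = hand_landmarks.landmark[4], hand_landmarks.landmark[8]
    return ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            distances = [pinch_distance(h) for h in results.multi_hand_landmarks]
            # first hand: pinch distance -> model scale (placeholder range)
            scale = 0.5 + distances[0] * 4.0
            # second hand, if present: pinch distance -> rotation in degrees (placeholder)
            rotation = distances[1] * 360.0 if len(distances) > 1 else 0.0
            print(f"scale={scale:.2f}  rotation={rotation:.1f}")
cap.release()
```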




















  • Reference: Torin Blankensmith





03 think about point cloud






    • To develop more model-based interactions in TouchDesigner, I decided to work with a point cloud model.

    • My final goal was to control the density of the particles in the point cloud model with the distance between the thumb and forefinger of the first hand, and the scaling of the model with the distance between the thumb and forefinger of the second hand, with the scaling accompanied by a vibration of the particles.

    • I entered the interface base, where the MediaPipe loop forms the first layer of connections:

    • base1 was created and used as the main point cloud processing container.

    • I used hand_data and a series of math + select nodes to map the gesture data to model parameter controls (see the range-remap sketch after this list).

    • geo1, geo2, render1, cam2, and light1 render and output the current point cloud scene.

    • At this point, the gestures were successfully connected to the point cloud control network, but the vibration and density control logic had not yet been introduced.
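Stripped of the node network, the math + select chain is essentially a range remap from raw finger distance to a parameter value. A rough Python equivalent, with placeholder numbers, might look like this:

```python
def remap(x, in_lo, in_hi, out_lo, out_hi):
    # Map the incoming finger distance (in normalized image space) onto a
    # usable parameter range, clamping so stray tracking values don't
    # blow the model up.
    t = (x - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)
    return out_lo + t * (out_hi - out_lo)

# e.g. a pinch distance of 0.02..0.25 drives model scale 0.4..2.0 (illustrative numbers)
scale = remap(0.12, 0.02, 0.25, 0.4, 2.0)
```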

    • I entered base1 as my third layer:

    • Imported the scanned .ply point cloud model using pointfilein.

    • Set the initial pose of the model using pointTransform1.

    • Extracted and smoothed the gesture distance data using math4, select2, and filter2 to control subsequent parameters.

    • The first hand data is connected to: 

    • point delete/set: used to control the number of points displayed, indirectly controlling the “density”.

    • noise1 → add1: superimposed vibration effect, the amplitude of the noise is controlled by the gesture.

    • The left hand data is connected to: 

    • the scale input of transform1, which scales the overall model (the density, vibration, and scale mappings are sketched after this list).
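The chain above boils down to a small amount of arithmetic. As a sketch of that math (a NumPy stand-in for the TouchDesigner operators; all coefficients are placeholders, not my actual settings):

```python
import numpy as np

def update_point_cloud(points, density_dist, scale_dist, rng):
    """points: (N, 3) array loaded from the scanned .ply model.
    density_dist: smoothed thumb-forefinger distance of the first hand (0..1).
    scale_dist:   smoothed thumb-forefinger distance of the other hand (0..1)."""
    # density: keep a fraction of the points proportional to the pinch distance
    keep = max(1, int(len(points) * np.clip(density_dist, 0.05, 1.0)))
    visible = points[rng.choice(len(points), size=keep, replace=False)]

    # vibration: noise whose amplitude also follows the first hand's gesture
    jitter = rng.normal(scale=0.01 * density_dist, size=visible.shape)

    # scale: the other hand's pinch distance scales the whole model
    scale = 0.5 + 2.0 * np.clip(scale_dist, 0.0, 1.0)
    return (visible + jitter) * scale

rng = np.random.default_rng(0)
cloud = rng.random((5000, 3))            # stand-in for the scanned point cloud
frame = update_point_cloud(cloud, density_dist=0.6, scale_dist=0.3, rng=rng)
```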

















    focusing the problem




    This project questions how interfaces might engage with the affective and spatial qualities of movement, instead of reducing the body to data.

    It addresses three main issues:

    Reduction of Gesture to Function — Most systems treat gestures as commands (e.g., pinch to zoom), ignoring their emotional and expressive potential. This project explores gestures as carriers of feeling and memory.

    Disconnection Between Body and Digital Space — Interactions often remain abstract and impersonal, mediated by screens or sensors. This work asks: What does it mean to reach into a space you cannot touch?

    Lack of Spatialized Emotional Feedback — Few systems let users feel their presence in a digital environment. This project uses hand gestures to manipulate a point cloud model, allowing users to “shape” memory-like landscapes through embodied, sensory interaction.


    research questions for the future:

  • What are we missing when we reduce gesture to input?

  • Can an interface understand the emotional weight of a gesture?

  • Can movement itself become a language for navigating memory and space?



    precedents


    Torin Blankensmith
    When I was learning smoothing and clamp, I initially set the values according to the video. When I set them very low, the hand shaking became very noticeable; when I set them too high, the system became less responsive. This made me realize that the system wasn't “recognizing movement” so much as “following the rhythm of intent,” and I began to think about what kind of data latency and responsiveness would really fit the mood of my work. In the end, I decided to keep the values fairly sensitive, to convey to the viewer that our gestures are very susceptible to the data environment; I wanted this subtle connection to be felt as an immediate, vulnerable interplay between the body and the system.
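The trade-off I was tuning can be reduced to one smoothing step followed by a clamp. This is only an illustration of the behaviour, not the actual CHOP settings, and the numbers are invented:

```python
def smooth_and_clamp(prev, new, smoothing=0.3, lo=0.0, hi=1.0):
    # Exponential smoothing: near 0 the output follows the raw (shaky) tracking
    # almost immediately; near 1 it becomes calm but noticeably laggy.
    blended = smoothing * prev + (1.0 - smoothing) * new
    # Clamp so outliers from lost tracking can't push the parameter out of range.
    return min(max(blended, lo), hi)

# keeping the smoothing low = sensitive, slightly trembling response
value = 0.0
for raw in [0.12, 0.15, 0.11, 0.40, 0.14]:   # fake, jittery pinch distances
    value = smooth_and_clamp(value, raw, smoothing=0.2)
```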

    Inspired by the “gesture triggering” in the tutorial, I followed it and added my own twist: instead of just a color change, the trigger now changes the density of the particle system while the transparency slowly fades. I think this is more dynamic and lets the audience experience more visual change than a single response. In this process I realized that a “gesture” is not just a signal but a process that can create a dynamic feedback rhythm. This connects with my concept that the particles in the point cloud model symbolize fragmented information, and that our gestural interventions can draw visual feedback out of it.
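As a sketch of that feedback rhythm (the threshold and fade rates here are invented for illustration, not taken from the tutorial or from my network):

```python
def pinch_triggered(distance, threshold=0.05):
    # the "gesture trigger": thumb and index pinched closer than the threshold
    return distance < threshold

def step(density, alpha, triggered, density_hi=1.0, density_lo=0.2,
         fade_in=0.2, fade_out=0.02):
    if triggered:
        # the trigger raises particle density and opacity quickly...
        density = min(density_hi, density + fade_in)
        alpha = min(1.0, alpha + fade_in)
    else:
        # ...then both slowly fade back, so the response is a rhythm, not a switch
        density = max(density_lo, density - fade_out)
        alpha = max(0.0, alpha - fade_out)
    return density, alpha

density, alpha = 0.2, 0.0
for d in [0.20, 0.04, 0.03, 0.30, 0.30]:     # fake pinch distances per frame
    density, alpha = step(density, alpha, pinch_triggered(d))
```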


    Shichao Zhao. 2019. Exploring How Interactive Technology Enhances Gesture-Based Expression and Engagement: A Design Study. This study explores how interactive technologies can support gesture-based emotional expression, rather than treating gestures as neutral control signals. It identifies that expressive movement—when supported by appropriate feedback—can enhance users’ sense of connection, embodiment, and engagement.

    What I found valuable was the idea that gestures can carry meaning beyond functionality, and that emotional depth can emerge when interaction systems support open-ended, bodily expression. This aligns directly with my own project, which uses hand gestures not to command, but to sculpt and distort a point cloud space—a metaphor for memory and presence.


    Lev Poretski, Ofer Arazy, Joel Lanir, Shalev Shahar, and Oded Nov. 2019. Virtual Objects in the Physical World: Relatedness and Psychological Ownership in Augmented Reality. This paper investigates how users form emotional and psychological connections with virtual objects in AR environments. It reveals that interactivity and embodied engagement are crucial for users to feel “ownership” of intangible objects, especially when digital content is layered onto physical space.

    This strongly informed my thinking about point cloud models. Even though they are abstract and untouchable, my use of gesture interaction—and the variability it enables—can help users feel that their presence has weight, and that memory is something they are actively shaping.


    Cloud Studies – by Forensic Architecture












    Although this is a political investigative project, it uses point clouds, spatial reconstruction, and non-linear temporal structures to visualize invisible air and poisonous gases, which can also be understood as invisible “traces of existence”.


    future possibilities

    01 Healing Space / Mind-Body Reconstruction Center
    direction: Trauma healing, Alzheimer's assistance, or end-of-life care environment

    02 Public interactive exhibition
    direction: Historical perception reconstruction in urban renewal projects, ecological protection areas, archaeological sites, digital museums, etc.

    03 Remote Affective Communication
    direction: Long-distance family ties, digital relics, virtual nostalgic spaces

    ©yudi zhang