A view of Edinburgh from 6th floor, Appleton Tower

Dezhi Luo

Academic

  1. Interests
  2. Recent/Ongoing Works
    1. In machine learning
    2. In theories and philosophy of cognitive science
    3. In psychophysics
  3. Trainings
  4. Other involvements
  5. Resources
  6. Contact

Interests

I’m broadly interested in how the mind works, as well as how understanding it helps us grasp the opportunities and risks posed by advanced AI systems. I focus mainly on consciousness, agency, and understanding, with occasional intersections with emotion, language, and sexuality.

I primarily work on theory (philosophical/formal/computational), but am more than happy to collaborate with experimentalists (see below!).

I’ve had the privilege of learning directly from people whose work has greatly influenced me. I am currently working on my thesis with Rick Lewis and Chandra Sripada, having worked in the labs led by Steve Fleming and John Jonides. I also owe a great deal to Daniel Rothschild, Laura Ruetsche, Gayle Rubin, and Maegan Fairchild, whose seminars and project feedback have shaped much of what I do today (and will do in the future).


Recent/Ongoing Works

In machine learning

  • “Core Knowledge Deficits in Multi-Modal Language Models” (co-led with Hokin Deng & Yijiang Li)
    (ICML 2025)

    We demonstrated that current foundation models do not ground their reasoning in a basic understanding of object, number, action space, and social relations, which is understood as the “developmental startup software” in humans. We hypothesize that this deficit may account for their lack of robustness in real-world scenarios.

    We also conducted focused analyses of how the models’ performance on specific subsections of the benchmark provides insight into particular domains of their functional profiles, including but not limited to theory of mind, mechanical reasoning, and perceptual constancy. We also devised concept hacking, a general method for assessing shortcut-taking behaviors in cognitively inspired benchmarking.

  • “A Very Big Video Reasoning Suite” (led by Hokin Deng and Zhongang Cai)
    (technical report for the project video-reason; under review)

    A very big suite for video reasoning, with a 1.2M dataset, hundreds of task generators, a new model trained with LoRA on WAN 2.2, and more.

    We think this adds to the idea that video models are the new frontier of visuospatial reasoning and beyond.
    (there is no free lunch in spatial world modeling!)

  • “Can Vision Language Models Infer Human Gaze?” (led by Zory Zhang & Pingyuan Feng)
    (under review)

    We ran a very carefully controlled study and found that they can’t. They rely too much on heads, but there’s more to it than that.

  • Much of the above work was done with the amazing people at GrowAI, an open-source community I co-founded with Hokin Deng, Yijiang Li, and Ziqiao Ma, in which we aim to evaluate and develop better multi-modal language models that are capable of learning and thinking like humans by grounding their reasoning in foundational cognitive mechanisms.

In theories and philosophy of cognitive science

  • “The risks of self-having artificial agents”
    (draft presented at the International Conference on Large-scale AI Risks & PAIR-NEU-CASIP Workshop on Agentic AI)

    I show that agents with conceptual self-processing capabilities engender substantial risks, both existential and welfare ones, and argue that most of the building blocks for such capabilities are already in place.

  • “Making good consciousness science”
    (early version presented at MoC 2026)

    I defend the view that we ought to adopt anti-realism for a good consciousness science, given the unique epistemic constraints it faces in distinguishing between computational and biological theories. I further argue that computational theories have an edge under this consideration due to their explanatory power, and that this implies a value-free ideal regarding ethical questions pertaining to consciousness.

  • “Rethinking the simulation vs. rendering dichotomy” (w/ Hokin Deng & Qingying Gao)
    (presented at SpaVLE @NeurIPS; under review)

    We show that the neuroscientific literature does not support the independence of spatial reasoning from graphic rendering, but rather indicates that they are jointly supported by prefrontal networks. This implies that there are no prima facie reasons to expect AI with coarse-grained visual encodings to model space.
    (incl. a hat tip to video reasoning!)

  • “The philosophical foundations of growing AI like a child” (w/ Hokin Deng & Yijiang Li)
    (presented at the Institute of Philosophy; under review)

    We argue that the nativist vs. empiricist debate with respect to language models is best resolved with a combination of foundational cognitive structures (“core knowledge”) and increased computational power (“the scaling law”). We then speculate on why core knowledge is absent in today’s MLLMs (as shown by our empirical work!) and propose a way to fix this.

  • “Perceptual metacognition, memory, and self-consciousness” (w/ Dorian Liu)
    (draft presented at MoC 2025, SPAN 2025, & ASSC 2024)

    We explore the hypothesis that both “pre-reflective” (e.g. sense of ownership) and “reflective” (e.g. introspection) types of self-consciousness rely on a generic perceptual metacognitive process while being differentiated by mnemonic contents in terms of their phenomenal characters.

  • “Mind uploading: a techno-philosophical analysis” (w/ Dorian Liu)
    (draft presented at ASSC 2025 and ISPSM 2025)

    We outline the lean technical conditions that would enable someone to survive as themselves after mind uploading, concluding that this requires a strong kind of computational functionalism which, despite its hard restraints, has not yet been falsified by empirical evidence.

  • “Hedonic reversals as a case against the emotional unconscious”
    (draft presented at ASSC 2023)

    I examine how hedonic reversal, the phenomenon of taking pleasure in experiences that are usually aversive (such as the fear of watching horror movies or the pain of eating chili peppers), informs us about the nature of emotion. I argue that it presents an elegant case of support for ambitious higher-order theories of emotion by showing that the subjective nature of emotional valence must be understood in terms of conceptual self-processing.

In psychophysics

  • “Forced-response evidence of cross-task conflict adaptation” (w/ Sarah Liberatore)
    (work-in-progress presented at OPAM 2024)

    Using the forced-response method and a new confound-minimized design, we found evidence of a congruency sequence effect (CSE) between the Simon and Flanker tasks. This supports the extremely cool but much-debated idea that conflict adaptation might be, at least to some extent, domain-general.

  • “The effect of zero during symbolic and non-symbolic numerical comparison” (supervisor: Benjy Barnett)
    (Affiliate studentship thesis at UCL)

    I proposed an affective-metacognitive theory of absence perception, based on the idea that it is an epistemic feeling triggered by source monitoring, and assessed it with a numerical comparison task that includes zero.

Trainings

I’m quite keen on going to summer/winter schools (partly for meeting new people and trying new foods). I’ve been at CreteLing 2023, NYI Winter School 2023, LSSLL 2021 & 22, and NASSLLI 2022. These trainings became less frequent after I discovered conferences.


Other involvements

I am a student committee member of the ASSC, which is notoriously the coolest conference in the world. I am also an associate member of AMCS and SPAN, the conferences of which are strong contenders for that title.

I regularly work for X Academy, a multidisciplinary educational program held every summer in China, where I’ve hosted the Cognitive Neuroscience course taught by Prof. Xiao Xiao (2023 & 24 & 25), Prof. Miao Cao (2024 & 25), and Prof. Ji Hu (2022).

In my past life I did linguistics olympiads. I have stayed with the community since graduating from high school, volunteering to host the 2023 UKLO national training camp and building a problem database for the organizing committee of CNOL.


Resources

I compiled some reading lists of foundational texts in cognitive science and the philosophy & theories of AI, both primarily for folks with intro-level backgrounds in psychology/philosophy/computer science.


Contact

Whether you are interested in collaborations or just want to chat, please feel free to reach me at ihzedoul [at] umich [dot] edu : D