Sven Schultze

Hi, I’m Sven Schultze, a PhD researcher at TU Darmstadt designing human-centered AI systems.

I explore how people and AI agents can understand each other better: interfaces that explain what models see, multimodal assistants that act across devices, and infrastructure for the emerging agentic web where websites expose clear affordances to LLM-based agents.

What I’m working on

  • Agent-ready web design. VOIX lets developers declare which actions an agent may take on their site, so agents don't have to rely on brittle scraping (a rough sketch of the idea follows this list). I test it with builders during multi-day hackathons and iterate on their prototypes.
  • Explainable, multimodal interaction. From interactive image-aesthetics explainers to conversational robots in enterprise contexts, I build tools that keep people in the loop and make AI behavior legible.
  • Pragmatic tooling. Open-source libraries such as SymphonAI and tidy-env, along with LiDAR localization experiments, grow out of the day-to-day needs of the researchers and practitioners I collaborate with.
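To make the "agent-ready" idea concrete, here is a minimal, hypothetical sketch of what a declared affordance could look like. The registry, action name, and parameter schema below are illustrative assumptions for an imaginary shop page, not VOIX's actual API; the point is only that an agent reads a typed description of what the site can do instead of reverse-engineering its UI.

```ts
// Hypothetical sketch of a declarative agent affordance (not the VOIX API).
// A site announces an action (name, description, typed parameters, handler)
// so an LLM-based agent can discover and invoke it directly instead of
// scraping and clicking through the rendered UI.

type ParamSpec = { type: "string" | "number"; description: string };

interface AgentAction {
  name: string;
  description: string;
  params: Record<string, ParamSpec>;
  run: (args: Record<string, unknown>) => Promise<unknown>;
}

// Illustrative registry a page script might populate at load time.
const agentActions: AgentAction[] = [];

agentActions.push({
  name: "add_to_cart",
  description: "Add a product to the shopping cart by its product id.",
  params: {
    productId: { type: "string", description: "The product identifier." },
    quantity: { type: "number", description: "How many items to add." },
  },
  // The handler calls the site's own endpoint; the agent never touches the DOM.
  run: async ({ productId, quantity }) =>
    fetch("/api/cart", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ productId, quantity }),
    }).then((r) => r.json()),
});

// An agent runtime could read `agentActions` to learn what the page offers.
console.log(agentActions.map((a) => ({ name: a.name, params: a.params })));
```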

Background

Early on I was the on-call “family IT department,” which taught me how easily tech can alienate people. That experience still shapes my research: I look for ways to align AI systems with human expectations, expose what they can do, and keep people in control.

Recently I have:

  • built explainable image-aesthetics tools so photographers can interrogate model judgments,
  • collaborated with industry partners on deploying multimodal LLM assistants on humanoid robots,
  • prototyped VOIX so web developers can declare agent affordances rather than leaving agents to rely on brittle screen scraping,
  • created SymphonAI and tidy-env so other researchers can spin up multi-agent pipelines or interactive simulations quickly.

Outside the lab I design worlds as a Dungeons & Dragons dungeon master and produce music — both inform how I think about interaction, storytelling, and systems design.