AI Evaluation & Automation Engineer

REAL

Software Engineering, Data Science

Tel Aviv District, Israel · Tel Aviv-Yafo, Israel

Posted on Apr 29, 2026

REAL is building an AI Execution Platform for real estate organizations.

Today, the data required to run real estate is scattered across fragmented systems, leading to missed insights and preventable financial leakage.

REAL transforms this complexity into connected intelligence and automated execution, enabling enterprises to operate with greater precision and confidence.

REAL Values

Ownership: We take responsibility and move decisively.

Clarity: We simplify complexity to deliver meaningful impact.

Accuracy: Precision matters in everything we build.

Velocity: We work with urgency and intent.

Partnership: We collaborate closely with customers and teammates.

Role Overview

Own the systems that define, measure, and enforce AI quality at REAL.
Translate ambiguous model behavior into measurable signals, automated tests, and release gates.
Operate across evaluation design, tooling, and production integration.

What You’ll Do

  • Design evaluation architectures (benchmarks, regression suites, coverage)
  • Build automated pipelines to run and score evals across models and prompts
  • Implement scoring systems (LLM-as-judge, rubrics, hybrid approaches)
  • Create and maintain golden datasets + edge-case suites
  • Develop internal tools for prompt testing, dataset generation, experiment tracking
  • Instrument systems for traces, outputs, and debugging
  • Detect regressions and enforce quality gates in CI/CD
  • Monitor model performance in production
  • Close the loop between eval insights and product improvements
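To make the responsibilities above concrete, here is a minimal sketch of the kind of system this role owns: scoring model outputs against a golden dataset with a simple rubric, then enforcing a regression gate suitable for CI/CD. All names, rubrics, and thresholds are illustrative assumptions, not REAL's actual stack; a production system would likely replace the keyword rubric with an LLM-as-judge or hybrid scorer.

```python
# Illustrative sketch only: golden-dataset scoring plus a CI quality gate.
# GoldenCase, keyword_score, and the 0.05 tolerance are hypothetical choices.
from dataclasses import dataclass
from typing import Callable


@dataclass
class GoldenCase:
    prompt: str
    expected_keywords: list[str]  # facts a correct answer must mention


def keyword_score(output: str, case: GoldenCase) -> float:
    """Cheap rubric: fraction of expected keywords present in the output.
    An LLM-as-judge or hybrid scorer could be swapped in here."""
    hits = sum(kw.lower() in output.lower() for kw in case.expected_keywords)
    return hits / len(case.expected_keywords)


def run_suite(model: Callable[[str], str], suite: list[GoldenCase]) -> float:
    """Run every golden case through the model and average the scores."""
    scores = [keyword_score(model(case.prompt), case) for case in suite]
    return sum(scores) / len(scores)


def quality_gate(score: float, baseline: float, tolerance: float = 0.05) -> bool:
    """Release gate: pass only if the mean score has not regressed
    more than `tolerance` below the recorded baseline."""
    return score >= baseline - tolerance
```

In CI, `run_suite` would be invoked against each candidate model or prompt version, and a failing `quality_gate` would block the release, which is one way to "detect regressions and enforce quality gates" as the list above describes.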

What We’re Looking For

  • 3–6 years in QA automation, dev tooling, or backend engineering
  • Strong Python skills and experience with production-level systems
  • Built testing frameworks or validation systems end-to-end
  • Hands-on with LLMs / RAG / agent workflows
  • Understands eval methods (benchmarking, A/B testing, LLM-as-judge, human-in-the-loop)
  • Experience with observability / logging / experiment tracking
  • Strong systems thinking (coverage, reliability, reproducibility)
  • Comfort with non-deterministic systems

Nice to Have

  • Experience with eval tools (LangSmith, W&B, MLflow, custom stacks)
  • CI/CD integration for model evaluation
  • Background in search, retrieval, or document systems
  • Built internal platforms or developer tools
  • Experience working in startups and business-driven environments