Location: Remote (Global) • Reports to: Head of Engineering • Type: Full-time • Created: April 2026


🚀 About Plurio

Plurio is an AI agent for performance marketing teams. We're not another dashboard or analytics tool — we're building AI that actually does the work: analyzes campaigns, spots problems, reallocates budgets, and explains what's happening in plain language.

We've raised $4.5M total, with the most recent $3.5M announced in March 2026. We work with marketing teams spending $100K–$5M/month on paid ads (Meta, Google, TikTok) — companies with complex funnels, long conversion chains, subscription and repeat-purchase models, and LTV-driven optimization (think EdTech, FinTech, Consumer Apps, Healthcare, Home Services).

The team is 30+ people, fully remote, and radically AI-first — everyone works in Cursor with a shared workspace. We don't just sell AI; we run on it.


🚀 Why This Role Exists

Plurio's attribution engine is a core part of the product. It processes complex customer journeys and determines how marketing spend translates into real business outcomes.

It already works in production and supports real clients, but it has grown into a system with complex logic, heavy queries, and non-trivial performance trade-offs. As we scale, these challenges become more visible and more important to solve.

We are looking for someone to take ownership of how this engine evolves. This includes improving performance, simplifying where possible, and making the system easier to reason about, while preserving the flexibility our clients rely on.

This is a hands-on role at the intersection of data modeling, SQL performance, and real-world business logic. You will be working on a system where changes are not isolated, and every decision has real impact.


🎯 What You'll Own

Attribution engine evolution: Lead the technical side of rebuilding the attribution algorithm and related logic while keeping client-facing flexibility — schema, batch logic, performance, and safe rollout.

SQL Server as a product: Own T-SQL, schema, and indexing for analytical and transactional workloads; read query plans, find regressions, and tune with measurable before/after results.

Performance and safe change: Lead performance tuning (indexes, statistics, partitioning where it fits), capacity planning, and careful rollout of DB changes in live systems — using SQL Server Profiler, actual execution plans (including for dynamic SQL), and columnstore indexes where they apply.

ETL and data paths: Improve the ETL and data pipelines that feed downstream analytics — including data consumed by AI-driven reporting and agent workflows — though this is not an "AI product engineer" role.

Collaboration with engineering: Partner on the data model and service boundaries, including migrations, backward compatibility, and clear contracts between application and database.

C# at mid level: Ship and review C# in ASP.NET Core (or similar) — async paths, data access through any ORM you have used in production, and tests — wherever the work sits between API and database. Deep, staff-level C# architecture is not the primary expectation.
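To give a concrete flavor of the tuning loop this role owns, here is a minimal T-SQL sketch of the "measurable before/after" workflow — capture a baseline, read the plan, add an index, re-measure. All table, column, and index names are hypothetical, for illustration only; they are not Plurio's actual schema.

```sql
-- Hypothetical example: tuning an attribution query.
-- Table and index names are illustrative, not Plurio's real schema.

-- 1. Capture a baseline: logical reads and elapsed time.
SET STATISTICS IO, TIME ON;

SELECT  t.CampaignId,
        SUM(t.Revenue) AS AttributedRevenue
FROM    dbo.Touchpoints AS t
WHERE   t.EventDate >= '2026-03-01'
GROUP BY t.CampaignId;

SET STATISTICS IO, TIME OFF;

-- 2. Inspect the actual execution plan; suppose it shows a
--    full clustered index scan on dbo.Touchpoints.

-- 3. Add a covering index so the date filter becomes a seek
--    and the aggregate reads only the columns it needs.
CREATE NONCLUSTERED INDEX IX_Touchpoints_EventDate
    ON dbo.Touchpoints (EventDate)
    INCLUDE (CampaignId, Revenue);

-- 4. Re-run step 1 and compare logical reads and elapsed time
--    before/after, keeping both numbers in the change record.
```

The same discipline — baseline, plan, change, re-measure — applies whether the fix is an index, a statistics update, or a partitioning change.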