FAQ
Quick reference for the terms, distinctions, and edge cases that come up while building against the API.
How does the product work, at a glance?
Aura Vision deploys cameras in physical retail locations. The cameras stream video to our analytics pipeline, which derives visitor metrics — entries, dwell, occupancy, demographics, movement between zones — and stores them. The API exposes those metrics, plus the configuration objects (locations, recordings, zones) that produced them.
There is no concept of “raw video” in the API. By the time data reaches you, it is already aggregated counts and rates with no identifying information.
What’s the difference between an organisation, a location, and a recording?
The resource hierarchy is organisation → location → recording.
- An organisation is your tenant. You authenticate as an organisation; everything you query is scoped to it.
- A location is a physical site — a single store, branch, venue, or building. Locations belong to one organisation. Most metric routes default to operating at the location level.
- A recording is a single camera feed inside a location. A large store may have many recordings (entrance, fitting-rooms, checkouts, etc.). Most metrics aggregate across all recordings in a location; you only address individual recordings for detail data like thumbnails or per-recording uptime.
Each level has a 24-character hex ID. IDs flow downwards: org → multiple locations → multiple recordings.
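As a sketch, IDs at any level can be sanity-checked client-side before a request is sent. The lowercase-hex pattern below is an assumption based on the "24-character hex ID" description, not an official validator:

```python
import re

# Assumed pattern: 24 lowercase hex characters (ObjectId-style).
HEX_ID = re.compile(r"^[0-9a-f]{24}$")

def is_valid_id(value: str) -> bool:
    """Return True if value looks like an organisation/location/recording ID."""
    return bool(HEX_ID.fullmatch(value))
```

Catching a malformed ID locally is cheaper than waiting for a 4xx from the API.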
What’s the difference between Traffic, Lines, and Areas?
All three count people, but they count them differently:
- Traffic (Entries, Passers-by, Capture rate) counts people relative to the location as a whole — entering the building, walking past the front, the ratio between the two. Driven by store-entry lines configured on each recording.
- Lines (Line entries, Line passers-by, Line capture rate, Line entries per visitor, Location movement) counts crossings of named gates inside a location — entries into specific zones, movement between zones. Driven by within-location lines.
- Areas (Area entries, Area dwell metrics, Area occupancy, Area utilisation, etc.) operate on 2D regions drawn inside a location — how many people are inside a zone, how long they spent there.
In short: Traffic is the building, Lines are thresholds inside it, Areas are regions inside it.
What’s the difference between a line and an area?
A line is a 1D threshold — a virtual gate drawn across a doorway, an aisle entrance, or a transition point. The metric question is “did they cross it?”
An area is a 2D region — a polygon covering part of the camera frame. The metric question is “are they inside it, and for how long?”
Lines drive entry-style metrics (people counted at the moment they cross). Areas drive time-based metrics (dwell, occupancy, utilisation).
What’s the difference between dwell and occupancy?
Both describe presence in an area, but they measure different things:
- Dwell time is time per visit — how long each visitor spent in the zone.
  `area_total_dwell_time` is the sum across all visits in the period; `area_average_dwell_time` is total dwell ÷ number of visits. Units: seconds.
- Occupancy is count at an instant — how many people are inside right now. `area_average_occupancy` is the mean across the period; `area_max_occupancy` is the peak. Units: people.
A zone can have high dwell but low occupancy (long, sparse visits) or vice versa (short, busy visits).
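The relationships above reduce to plain arithmetic. Illustrative values only; the variable names mirror the metric names described above:

```python
# Dwell: seconds per visit. One entry per visit in the period.
visits_dwell_seconds = [300, 420, 180, 600]

area_total_dwell_time = sum(visits_dwell_seconds)                              # 1500
area_average_dwell_time = area_total_dwell_time / len(visits_dwell_seconds)   # 375.0

# Occupancy: head-counts sampled at instants across the same period.
occupancy_samples = [2, 5, 3, 4]
area_average_occupancy = sum(occupancy_samples) / len(occupancy_samples)      # 3.5
area_max_occupancy = max(occupancy_samples)                                   # 5
```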
What’s a taxonomy?
A taxonomy is a zone-path string that identifies a configured line or area inside a location — for example `Service:Checkout`, `Products:Clothing:Shoes`, `Service:Fitting Rooms`. They appear as body parameters across the Metrics API wherever a query needs to point at a specific zone (e.g. `taxonomy: ["Service:Checkout"]` on Line entries or Area entries).
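A minimal request-body sketch showing where the taxonomy parameter sits. The surrounding field names (`entityType`, `entities`, `aggregationPeriod`) follow later sections of this FAQ; the exact shape of any given route may differ:

```python
# Hypothetical body for a zone-scoped metric query (e.g. Line entries).
body = {
    "entityType": "location",
    "entities": ["5f1a2b3c4d5e6f7a8b9c0d1e"],  # 24-char hex location ID
    "taxonomy": ["Service:Checkout"],           # zone-path string(s) to target
    "aggregationPeriod": "day",
}
```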
Where do taxonomies come from? Can I create them?
Taxonomies are configured per recording when lines and areas are drawn on a camera’s frame — typically in the Aura Vision platform UI during recording setup. Each line or area gets a taxonomy string. The same string can repeat across recordings: an organisation-wide query for Service:Checkout aggregates every checkout zone across every location.
You can’t create or rename taxonomies through this API — that’s a configuration concern handled by your Aura Vision account team. What you can do is discover which taxonomies exist for your locations:
- Area taxonomies — query Area taxonomy (`detail/list` with `returnEntityType: "area_context"`). Each record carries `area_type`, `taxonomy`, `recording_id`.
- Line taxonomies — query Line taxonomy (`detail/list` with `returnEntityType: "line_context"`). Each record carries `line_type`, `taxonomy`, `direction`.
Taxonomies are colon-delimited paths. No enforced schema — `Service:*`, `Products:*` etc. are conventions, not rules. Hierarchy is purely semantic: `Products:Clothing` and `Products:Clothing:Shoes` are independent strings; the API doesn't roll one up into the other.
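Since the API doesn't roll paths up, any hierarchical rollup has to happen client-side. A minimal sketch, assuming per-taxonomy counts have already been fetched:

```python
def rollup(counts: dict, prefix: str) -> int:
    """Sum counts for the prefix itself plus any deeper path beneath it."""
    return sum(
        n for taxonomy, n in counts.items()
        if taxonomy == prefix or taxonomy.startswith(prefix + ":")
    )

counts = {
    "Products:Clothing": 120,
    "Products:Clothing:Shoes": 45,
    "Service:Checkout": 300,
}
```

Note the `prefix + ":"` check: a plain `startswith(prefix)` would wrongly match sibling zones like `Products:ClothingAccessories`.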
What line types exist?
Every line carries a `line_type`:
- `location-entry` — a store-entry line (the front door). Counted by Traffic metrics.
- `location-pass-by` — a store-frontage line. Counted by Traffic > Passers-by.
- `within-location-entry` — a one-sided threshold into a zone. Counted by Line entries.
- `within-location-movement` — a directional gate between two zones. Counted by Location movement.
- `within-location-pass-by` — a within-store pass-by line.
What area types exist?
`area_type` is `"taxonomy"` (a named zone configured on a recording) or `"location-dwell"` (a pseudo-area covering the whole location, used for store-wide dwell calculations).
What should I put in entities and entityType?
Most metric routes accept `entityType: "location" | "organisation"`. The `entities` array carries IDs matching that type:
- `entityType: "location"` — aggregate across the listed location IDs
- `entityType: "organisation"` — aggregate across the whole organisation (one or more locations)
Detail data (`detail/list`) and uptime routes also accept `entityType: "recording"` to address individual cameras.
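A small builder that enforces the `entityType`/`entities` pairing can catch mistakes before a request leaves your code. This is a sketch based on the values listed above, not an official client:

```python
# Accepted entity types per this FAQ; "recording" is only valid on
# detail/list and uptime routes.
VALID_ENTITY_TYPES = {"location", "organisation", "recording"}

def build_query(entity_type: str, entity_ids: list) -> dict:
    """Build the entityType/entities portion of a metric request body."""
    if entity_type not in VALID_ENTITY_TYPES:
        raise ValueError(f"unsupported entityType: {entity_type}")
    return {"entityType": entity_type, "entities": entity_ids}
```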
Which aggregationPeriod should I use?
`aggregationPeriod` controls how time is bucketed in the response:
- Calendar buckets — `15min`, `hour`, `day`, `week_iso`, `week_us`, `month`, `quarter`, `year`. Use when you want a time-series.
- Collapsed buckets — `hourofday` (all data → 24 buckets), `dayofweek` (→ 7), `dayofweek-hourofday` (→ 168 grid). Use for behavioural questions like “when do people visit on average?”
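To make the collapsed buckets concrete, here is roughly what they collapse onto. The API does this server-side; this sketch only illustrates the bucket keys:

```python
from datetime import datetime

def hourofday_bucket(ts: datetime) -> int:
    """hourofday: every timestamp maps to one of 24 buckets (0..23)."""
    return ts.hour

def dayofweek_hourofday_bucket(ts: datetime) -> tuple:
    """dayofweek-hourofday: (ISO weekday 1..7, hour 0..23) -> one of 168 cells."""
    return (ts.isoweekday(), ts.hour)
```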
What do facets do?
`facets` controls what the response carries — request only what you need:
- `segments` — time-series rows (one per bucket)
- `summary` — totals/averages across the full period
- `aggregates` — day-of-week or hour-of-day rollups
- `thumbnails` — camera thumbnails associated with the period
- `thumbnails-with-statelems` — thumbnails with line/area overlays drawn
- `estimates` — imputed values for periods affected by camera downtime (only on Traffic > Entries under strict conditions)
How do I break results down by location, demographics, or zone?
`breakdownByDimensions` splits the data along one or more axes:
- `entity` — one row per location (or recording)
- `taxonomy` — one row per zone
- `age`, `gender`, `role` — demographic splits
If you break down by `age`, `gender`, or `role`, you must also include the matching filter array (`ages`, `genders`, `roles`) listing which values you want.
Combine breakdowns to get a multi-dimensional cube — `["entity", "gender"]` returns one row per location per gender.
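The matching-filter rule is easy to forget, so a payload helper can enforce it before sending. This is a hypothetical helper, not part of any official SDK:

```python
# Demographic dimensions and the filter array each one requires.
REQUIRED_FILTERS = {"age": "ages", "gender": "genders", "role": "roles"}

def with_breakdowns(body: dict, dims: list, **filters) -> dict:
    """Attach breakdownByDimensions, checking demographic filter arrays."""
    for dim in dims:
        needed = REQUIRED_FILTERS.get(dim)
        if needed and needed not in filters:
            raise ValueError(f"breakdown '{dim}' needs a '{needed}' filter array")
    return {**body, "breakdownByDimensions": dims, **filters}

body = with_breakdowns({}, ["entity", "gender"], genders=["male", "female"])
```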
What demographic values can I filter on?
- Age buckets: `0_15`, `16_24`, `25_34`, `35_44`, `45_54`, `55_64`, `65_100`
- Gender: `male`, `female`
- Role: `customer`, `staff`, `customer-customer`, `customer-staff`, `staff-staff` (the multi-person values are for proximity/interaction analytics)
These are estimates derived from the analytics pipeline, not from explicit identification.
How can I get real-time data?
Two options depending on what “real-time” means for your use case:
- Live device connectivity — query `status/uptime` for a real-time snapshot of which cameras are online right now. Reflects the latest heartbeat from each recording.
- Recent visitor metrics — query any metric endpoint (Entries, Line entries, Area entries, etc.) for the most recent ~10 minutes. Today’s data refreshes roughly every 10 minutes; older periods settle overnight to absorb late frames and minor corrections. See the “Update frequency” section on each metric page for the precise schedule.
How are sales / point-of-sale metrics different from camera metrics?
Sales metrics (Transactions, Volume, Average transaction value, etc.) come from your POS feed, not from cameras. Two consequences:
- No demographics. POS records don’t carry visitor attributes. Sales metrics cannot be broken down by age, gender, or role. Always include `entity` in `breakdownByDimensions` and nothing else.
- Joined at the location level. Sales tie to a location, not to specific zones — so Conversion rate is transactions ÷ location entrants, never zone entrants.
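The conversion-rate definition can be sketched directly, returning `None` when the denominator is zero or missing, which mirrors the API's null-vs-0 distinction:

```python
def conversion_rate(transactions: int, entrants) -> float:
    """transactions / location entrants; None when the rate is undefined."""
    if not entrants:          # None or 0 entrants: rate is undefined, not 0
        return None
    return transactions / entrants
```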
What’s a campaign?
A campaign is a test-vs-control measurement structure managed via the Core API. Define a test period, an optional control period, a set of test/control locations, a metric to evaluate, and an expected uplift. The platform then evaluates whether the metric moved in the test group beyond what the control group accounts for.
Campaigns carry `taxonomies` and `lineTaxonomies` lists — the zones the metric applies to — and `selectedBreakdowns` for demographic splits.
A response has null in it — is that the same as 0?
No, and don’t coerce one to the other:
- `null` — no data exists for that period (outside opening hours, or a rate is undefined because the denominator was zero).
- `0` — data was collected and the count was zero.
Losing the distinction breaks averages and trend analysis.
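A tiny numeric example of why the distinction matters for averages, using hypothetical hourly entry counts:

```python
# Two buckets fall outside opening hours (null in the API, None here).
entries = [120, 95, None, None, 110]

present = [v for v in entries if v is not None]
correct_average = sum(present) / len(present)          # 325 / 3 ≈ 108.33

# Coercing null -> 0 silently counts closed hours as zero-traffic hours.
coerced_average = sum(v or 0 for v in entries) / len(entries)  # 325 / 5 = 65.0
```

The coerced figure understates average hourly traffic by roughly 40% in this example.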
What do heatmap and thumbnail routes return?
Images, not numbers:
- Heatmaps — coloured overlays on the camera frame showing where people walked, lingered, entered, or exited.
- Thumbnails — recent camera frames, useful for visualisation or admin UIs.
- Thumbnails with overlays — thumbnails with the configured line/area coordinates drawn on top. Useful for debugging zone configuration.
All three return signed S3 URLs you fetch with a plain GET.
Where to go next
- Quickstart — four objective-driven examples
- Core API overview — REST endpoints
- Metrics API overview — every metric route grouped by topic