When we started building Navionyx, our GPS tracking platform, everything looked simple.
A few vehicles, a few thousand location points, and basic tracking dashboards.

Fast forward a few months, and we were dealing with:

  • 100+ million GPS records

  • GBs of time-series data

  • Thousands of devices sending data continuously

  • A real-time dashboard that users expected to load instantly

This is the story of how we redesigned our database, architecture, and query strategy to scale reliably, without overengineering or throwing unnecessary infrastructure at the problem.


📌 The Real Challenge of GPS Systems

GPS tracking systems are deceptively hard to scale.

They are:

  • Write-heavy → constant inserts from devices

  • Read-heavy → dashboards polling frequently

  • Time-series driven → data grows forever

At scale, naïve designs lead to:

  • Slow ORDER BY queries

  • Full table scans

  • EC2 CPU spikes

  • Dashboards taking seconds to load

We had to rethink how GPS data should be stored and queried.


[Diagram: High-level GPS tracking architecture, with vehicles sending location data to an AWS EC2 backend, a PostgreSQL time-series database, and a real-time fleet dashboard.]


🧠 Designing GPS Data as Time-Series (Not Relational)

One of the first decisions we made was treating GPS data as append-only time-series data.

Key principles:

  • No updates, only inserts

  • Optimize for "latest point per vehicle"

  • Avoid unnecessary joins

Simplified schema:

CREATE TABLE vehicle_tracking_data (
    id            BIGSERIAL,
    device_id     TEXT,
    gps_timestamp TIMESTAMP,
    latitude      DOUBLE PRECISION,
    longitude     DOUBLE PRECISION,
    speed         INTEGER,
    ignition      BOOLEAN
);

This allowed PostgreSQL to do what it does best: fast writes and indexed reads.
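
In practice, the write path is a single append per GPS fix; a sketch with a made-up device id and coordinates:

INSERT INTO vehicle_tracking_data
    (device_id, gps_timestamp, latitude, longitude, speed, ignition)
VALUES
    ('DEV-001', now(), 12.9716, 77.5946, 42, true);  -- hypothetical fix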


⚡ PostgreSQL: The Unsung Hero

PostgreSQL handled our scale surprisingly well, once we used it properly.

Key features we relied on:

  • Composite indexes

  • Descending indexes

  • Table partitioning

  • Materialized views

  • Aggressive query optimization

One of the most impactful indexes we added:

CREATE INDEX vtd_device_ts_idx
ON vehicle_tracking_data (device_id, gps_timestamp DESC);

This alone reduced our latest GPS fetch time from seconds to milliseconds.
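
For a single vehicle, the hot path this index serves is the classic "latest row" lookup; the device id below is a placeholder:

SELECT device_id, gps_timestamp, latitude, longitude, speed, ignition
FROM vehicle_tracking_data
WHERE device_id = 'DEV-001'
ORDER BY gps_timestamp DESC  -- same order as the index, so no sort step
LIMIT 1;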


🧩 Partitioning: Making 100M Rows Feel Manageable

Instead of one massive table, we partitioned GPS data by date.

Why date-based partitioning works well for GPS:

  • Queries usually target recent data

  • Old data is rarely accessed

  • Automatic partition pruning reduces scans

PARTITION BY RANGE (gps_timestamp)
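
Concretely, a minimal sketch of the same table declared as partitioned; the monthly ranges and partition names are illustrative (daily or weekly ranges work the same way):

CREATE TABLE vehicle_tracking_data (
    id            BIGSERIAL,
    device_id     TEXT,
    gps_timestamp TIMESTAMP NOT NULL,
    latitude      DOUBLE PRECISION,
    longitude     DOUBLE PRECISION,
    speed         INTEGER,
    ignition      BOOLEAN
) PARTITION BY RANGE (gps_timestamp);

-- One partition per month, created ahead of time.
CREATE TABLE vehicle_tracking_data_2024_01
    PARTITION OF vehicle_tracking_data
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Archiving a month becomes a metadata operation instead of a massive DELETE.
ALTER TABLE vehicle_tracking_data DETACH PARTITION vehicle_tracking_data_2024_01;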

Results:

  • Smaller index sizes per partition

  • Faster deletes and archiving

  • Predictable performance even as data grows


📊 The Dashboard Bottleneck

Our fleet dashboard shows:

  • Total vehicles

  • Running

  • Stopped

  • Idle

  • Offline

Originally, each refresh recalculated this from raw GPS data.

That meant:

  • Repeating expensive queries

  • CPU-heavy aggregations

  • Slower response times during peak load
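
Roughly, every refresh boiled down to a full "latest row per device" pass over raw data; a reconstruction, not our exact query:

SELECT status, COUNT(*) AS vehicles
FROM (
    SELECT DISTINCT ON (device_id)
        device_id,
        CASE
            WHEN gps_timestamp < now() - interval '10 minutes' THEN 'offline'
            WHEN speed > 5 THEN 'running'
            WHEN ignition THEN 'idle'
            ELSE 'stopped'
        END AS status
    FROM vehicle_tracking_data
    ORDER BY device_id, gps_timestamp DESC
) latest
GROUP BY status;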

We needed a smarter approach.


🔥 Materialized Views: The Game Changer

Instead of calculating vehicle states repeatedly, we pre-computed them.

We introduced a materialized view that:

  • Stores the latest GPS point per vehicle

  • Derives vehicle status

  • Refreshes periodically

CREATE MATERIALIZED VIEW vehicle_live_status AS
SELECT DISTINCT ON (device_id)
    device_id,
    gps_timestamp,
    speed,
    ignition,
    CASE
        WHEN gps_timestamp < now() - interval '10 minutes' THEN 'offline'
        WHEN speed > 5 THEN 'running'
        WHEN ignition = true THEN 'idle'
        ELSE 'stopped'
    END AS status
FROM vehicle_tracking_data
ORDER BY device_id, gps_timestamp DESC;

Now dashboard queries became trivial:

SELECT status, COUNT(*)
FROM vehicle_live_status
GROUP BY status;

⚡ Instant results. Minimal load.
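
"Refreshes periodically" is a single statement on a schedule; with a unique index in place, PostgreSQL can rebuild the view without blocking readers. The index name and the scheduling mechanism are our own choices:

-- Needed once, so REFRESH ... CONCURRENTLY is allowed:
CREATE UNIQUE INDEX vehicle_live_status_device_idx
    ON vehicle_live_status (device_id);

-- Run periodically (cron, pg_cron, or an application timer):
REFRESH MATERIALIZED VIEW CONCURRENTLY vehicle_live_status;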


โ˜๏ธ AWS EC2: Scaling Without Overengineering

We ran PostgreSQL on dedicated EC2 instances, tuned for database workloads.

Key decisions:

  • High-RAM EC2 instance

  • SSD-backed storage

  • Proper PostgreSQL memory tuning (sketched below)

  • Connection pooling (PgBouncer)
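
The memory tuning can be expressed with ALTER SYSTEM; the values below are illustrative starting points for a high-RAM, SSD-backed box, not our production numbers:

ALTER SYSTEM SET shared_buffers = '8GB';         -- pages cached by PostgreSQL itself (takes effect after a restart)
ALTER SYSTEM SET effective_cache_size = '24GB';  -- planner hint that also covers the OS cache
ALTER SYSTEM SET random_page_cost = 1.1;         -- SSDs make random reads almost as cheap as sequential
ALTER SYSTEM SET work_mem = '64MB';              -- per-sort / per-hash memory
SELECT pg_reload_conf();                         -- apply the reloadable settings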

Instead of blindly scaling horizontally, we:

  • Optimized queries first (see below)

  • Reduced unnecessary load

  • Scaled only when metrics justified it
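
"Optimize first" mostly meant measuring. Assuming the pg_stat_statements extension is enabled, the worst offenders are easy to rank:

-- Column names as of PostgreSQL 13+ (older versions use total_time / mean_time).
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;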


📦 Handling GBs of GPS Data Efficiently

Why this architecture works:

  • Append-only writes keep inserts fast

  • Indexes optimize the most common access patterns

  • Partitioning keeps queries scoped

  • Materialized views eliminate runtime computation

PostgreSQL proved it can comfortably handle:

  • Hundreds of millions of rows

  • Continuous writes

  • Real-time dashboards


🧠 Key Takeaways

If you're building a GPS or telemetry system:

  • Model data for access patterns, not just schema

  • Index for "latest record" queries

  • Use materialized views for dashboards

  • Partition early, not after performance breaks

  • PostgreSQL is far more powerful than people assume

With the right design, you don't need exotic databases, just well-used fundamentals.

