
Date and Time Functions

Learn Date and Time Functions for free with explanations, exercises, and a quick test (for Data Analysts).

Published: December 19, 2025 | Updated: December 19, 2025

Who this is for

  • Data Analysts who need to group metrics by day/week/month, measure durations, and build time-based dashboards.
  • Anyone writing SQL against product, marketing, ops, or finance data with dates and timestamps.

Prerequisites

  • Comfort with SELECT, WHERE, GROUP BY, ORDER BY, and basic aggregates (COUNT, SUM, AVG).
  • Ability to read your database's date/time docs when syntax differs. Examples below show multiple dialects.

Learning path

  1. Understand core date/time concepts and mental model.
  2. Study the worked examples and run them on sample data.
  3. Do the exercises and compare with solutions.
  4. Take the quick test below. Anyone can take it; saved progress is available to logged-in users.

Why this matters

  • Track daily active users, signups, and conversions by day/week/month.
  • Measure lead time: order to ship, ship to delivery, ticket open to close.
  • Create clean time buckets for charts and cohorts.
  • Filter accurate ranges (e.g., last 7 days) without off-by-one errors.

Concept explained simply

Dates and times in SQL let you answer “when” questions. You’ll usually do four things: extract parts (year, month), truncate to buckets (start of day/week), add/subtract intervals (last 7 days), and compute differences (delivery time in hours).

Key building blocks (with cross-dialect notes)
  • Now/current time:
    • PostgreSQL: NOW(), CURRENT_DATE, CURRENT_TIMESTAMP
    • MySQL: NOW(), CURDATE()
    • SQL Server: SYSDATETIME(), GETDATE(), CAST(GETDATE() AS date)
    • BigQuery: CURRENT_DATE(), CURRENT_TIMESTAMP()
    • SQLite: datetime('now'), date('now')
  • Extract parts:
    • PostgreSQL: EXTRACT(YEAR FROM ts), DATE_PART('dow', ts)
    • MySQL: YEAR(ts), MONTH(ts), DAYOFWEEK(ts)
    • SQL Server: DATEPART(year, ts), DATEPART(week, ts)
    • BigQuery: EXTRACT(YEAR FROM ts)
    • SQLite: strftime('%Y', ts)
  • Truncate/bucket:
    • PostgreSQL: DATE_TRUNC('day', ts), DATE_TRUNC('week', ts)
    • MySQL: DATE(ts) (day), or DATE_FORMAT for formatted buckets; week bucketing often via YEARWEEK(ts)
    • SQL Server: CAST(ts AS date) (day), or build week starts with DATEADD/DATEDIFF
    • BigQuery: DATE_TRUNC(date_col, WEEK), TIMESTAMP_TRUNC(ts, DAY)
    • SQLite: date(ts) (day); weeks via custom calculations
  • Add/subtract intervals:
    • PostgreSQL: ts + INTERVAL '7 days'
    • MySQL: DATE_ADD(ts, INTERVAL 7 DAY)
    • SQL Server: DATEADD(day, 7, ts)
    • BigQuery: TIMESTAMP_ADD(ts, INTERVAL 7 DAY)
    • SQLite: datetime(ts, '+7 days')
  • Differences:
    • PostgreSQL: EXTRACT(EPOCH FROM (t2 - t1)) / 3600 AS hours
    • MySQL: TIMESTAMPDIFF(HOUR, t1, t2)
    • SQL Server: DATEDIFF(hour, t1, t2)
    • BigQuery: TIMESTAMP_DIFF(t2, t1, HOUR)
    • SQLite: (julianday(t2) - julianday(t1)) * 24 AS hours
  • Reliable date ranges: prefer half-open intervals: created_at >= start AND created_at < next_boundary to avoid missing late-night times.

Note: Syntax varies by database; adjust to your system.
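The building blocks above map one-to-one onto runnable SQL. As a quick smoke test, here is the SQLite column exercised through Python's built-in sqlite3 module (the timestamp value is made up for illustration):

```python
import sqlite3

# A quick tour of the SQLite building blocks, run through Python's
# built-in sqlite3 module (no table needed; SELECT works standalone).
conn = sqlite3.connect(":memory:")

ts = "2025-03-15 22:30:00"  # hypothetical timestamp

# Extract a part: strftime('%Y', ts) returns the year as text.
year = conn.execute("SELECT strftime('%Y', ?)", (ts,)).fetchone()[0]

# Truncate to a day bucket: date(ts) drops the time portion.
day = conn.execute("SELECT date(?)", (ts,)).fetchone()[0]

# Shift by an interval: datetime(ts, '+7 days').
later = conn.execute("SELECT datetime(?, '+7 days')", (ts,)).fetchone()[0]

# Difference in hours via julianday (days as floats, so multiply by 24).
hours = conn.execute(
    "SELECT (julianday('2025-03-16 04:30:00') - julianday(?)) * 24", (ts,)
).fetchone()[0]

print(year, day, later, round(hours, 1))
# → 2025 2025-03-15 2025-03-22 22:30:00 6.0
```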

Mental model

Imagine every timestamp on a long tape. You either bucket timestamps (snap each to the start of a period), shift the tape (add/subtract intervals), or measure the gap between two marks (differences). Grouping uses bucketed values; filtering uses start–end windows.

Worked examples

1) Daily Active Users (DAU) for the last 7 full days

Assume table: events(user_id, occurred_at TIMESTAMP).

-- PostgreSQL
SELECT DATE_TRUNC('day', occurred_at) AS day_start,
       COUNT(DISTINCT user_id) AS dau
FROM events
WHERE occurred_at >= DATE_TRUNC('day', NOW()) - INTERVAL '7 days'
  AND occurred_at < DATE_TRUNC('day', NOW())
GROUP BY 1
ORDER BY 1;

-- MySQL
SELECT DATE(occurred_at) AS day_start,
       COUNT(DISTINCT user_id) AS dau
FROM events
WHERE occurred_at >= DATE(NOW()) - INTERVAL 7 DAY
  AND occurred_at < DATE(NOW())
GROUP BY 1
ORDER BY 1;

-- SQL Server
WITH today AS (
  SELECT CAST(GETDATE() AS date) AS d
)
SELECT CAST(occurred_at AS date) AS day_start,
       COUNT(DISTINCT user_id) AS dau
FROM events
CROSS JOIN today
WHERE occurred_at >= DATEADD(day, -7, today.d)
  AND occurred_at < today.d
GROUP BY CAST(occurred_at AS date)
ORDER BY day_start;

-- BigQuery
SELECT TIMESTAMP_TRUNC(occurred_at, DAY) AS day_start,
       COUNT(DISTINCT user_id) AS dau
FROM `project.dataset.events`
WHERE occurred_at >= TIMESTAMP_SUB(TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), DAY), INTERVAL 7 DAY)
  AND occurred_at < TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), DAY)
GROUP BY 1
ORDER BY 1;
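The dialects above omit SQLite. A minimal SQLite variant of the same DAU query, runnable through Python's sqlite3 with a tiny made-up events table:

```python
import sqlite3

# SQLite variant of the DAU query, using a tiny in-memory events table.
# The table contents are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, occurred_at TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [
        (1, "2025-03-10 09:00:00"),
        (2, "2025-03-10 23:59:59"),  # late-night event still counts for Mar 10
        (1, "2025-03-11 08:00:00"),
        (1, "2025-03-11 12:00:00"),  # same user twice: DISTINCT counts them once
    ],
)

# Half-open window [start, end); date(occurred_at) is the day bucket.
rows = conn.execute(
    """
    SELECT date(occurred_at) AS day_start,
           COUNT(DISTINCT user_id) AS dau
    FROM events
    WHERE occurred_at >= '2025-03-10' AND occurred_at < '2025-03-12'
    GROUP BY day_start
    ORDER BY day_start
    """
).fetchall()

print(rows)  # → [('2025-03-10', 2), ('2025-03-11', 1)]
```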

2) Orders per week with week start date

Assume table: orders(id, created_at TIMESTAMP).

-- PostgreSQL (weeks start Monday by default)
SELECT DATE_TRUNC('week', created_at)::date AS week_start,
       COUNT(*) AS orders
FROM orders
GROUP BY 1
ORDER BY 1;

-- MySQL (week start can vary; mode 3 = Monday start)
SELECT STR_TO_DATE(CONCAT(YEARWEEK(created_at, 3), ' Monday'), '%x%v %W') AS week_start,
       COUNT(*) AS orders
FROM orders
GROUP BY 1
ORDER BY 1;

-- SQL Server (compute Monday week start; assumes the default DATEFIRST = 7)
SELECT DATEADD(day, - (DATEPART(weekday, created_at) + 5) % 7, CAST(created_at AS date)) AS week_start,
       COUNT(*) AS orders
FROM orders
GROUP BY DATEADD(day, - (DATEPART(weekday, created_at) + 5) % 7, CAST(created_at AS date))
ORDER BY week_start;

-- BigQuery
SELECT DATE_TRUNC(DATE(created_at), WEEK(MONDAY)) AS week_start,
       COUNT(*) AS orders
FROM `project.dataset.orders`
GROUP BY 1
ORDER BY 1;
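SQLite (listed earlier as "weeks via custom calculations") can compute a Monday week start with date modifiers: step back 6 days, then jump forward to the next Monday. A sketch via Python's sqlite3; the helper name week_start is ours:

```python
import sqlite3

# Monday week start in SQLite: date(d, '-6 days', 'weekday 1').
# 'weekday 1' advances to the next Monday (no change if already Monday),
# so going back 6 days first lands every date on its own week's Monday.
conn = sqlite3.connect(":memory:")

def week_start(d):
    return conn.execute(
        "SELECT date(?, '-6 days', 'weekday 1')", (d,)
    ).fetchone()[0]

print(week_start("2025-03-10"))  # Monday maps to itself → 2025-03-10
print(week_start("2025-03-13"))  # Thursday → 2025-03-10
print(week_start("2025-03-16"))  # Sunday → 2025-03-10
```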

3) Average delivery time in hours

Assume table: shipments(order_id, shipped_at TIMESTAMP, delivered_at TIMESTAMP).

-- PostgreSQL
SELECT AVG(EXTRACT(EPOCH FROM (delivered_at - shipped_at)) / 3600.0) AS avg_hours
FROM shipments
WHERE shipped_at IS NOT NULL AND delivered_at IS NOT NULL;

-- MySQL
SELECT AVG(TIMESTAMPDIFF(HOUR, shipped_at, delivered_at)) AS avg_hours
FROM shipments
WHERE shipped_at IS NOT NULL AND delivered_at IS NOT NULL;

-- SQL Server
SELECT AVG(CAST(DATEDIFF(hour, shipped_at, delivered_at) AS float)) AS avg_hours
FROM shipments
WHERE shipped_at IS NOT NULL AND delivered_at IS NOT NULL;

-- BigQuery
SELECT AVG(TIMESTAMP_DIFF(delivered_at, shipped_at, HOUR)) AS avg_hours
FROM `project.dataset.shipments`
WHERE shipped_at IS NOT NULL AND delivered_at IS NOT NULL;
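A SQLite version of the same average, using the julianday technique from the building blocks; the sample shipments rows are invented:

```python
import sqlite3

# SQLite version of the delivery-time average: julianday() returns days
# as floats, so multiply the difference by 24 for hours.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE shipments (order_id INTEGER, shipped_at TEXT, delivered_at TEXT)"
)
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [
        (1, "2025-03-01 08:00:00", "2025-03-01 20:00:00"),  # 12 h
        (2, "2025-03-02 10:00:00", "2025-03-03 10:00:00"),  # 24 h
        (3, "2025-03-04 09:00:00", None),                   # in transit: excluded
    ],
)

avg_hours = conn.execute(
    """
    SELECT AVG((julianday(delivered_at) - julianday(shipped_at)) * 24)
    FROM shipments
    WHERE shipped_at IS NOT NULL AND delivered_at IS NOT NULL
    """
).fetchone()[0]

print(round(avg_hours, 1))  # → 18.0
```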

Exercises

Try these on your data (or a sandbox). Then open the solutions to compare. Use the checklist to self-review.

  • Exercise ex1: Orders per month this year.
  • Exercise ex2: Average delivery time (hours) over the last 90 days.
  • Exercise ex3: Events in the last 24 hours, grouped by hour.
  • Checklist:
    • You used truncation/bucketing for GROUP BY (day/week/month) instead of string formatting.
    • Your date filters are half-open (>= start AND < next_boundary) when time is involved.
    • You handled NULL timestamps in difference calculations.
    • You verified week start convention (Monday vs Sunday) matches your stakeholders.

Common mistakes and self-check

  • Using BETWEEN for time ranges and missing late-night records. Prefer created_at >= start AND created_at < end.
  • Mixing DATE with TIMESTAMP in comparisons without casting. Align types before comparing.
  • Assuming week starts on Monday everywhere. Confirm and document.
  • Misreading argument order: SQL Server DATEDIFF and MySQL TIMESTAMPDIFF take (start, end), while MySQL DATEDIFF(d1, d2) returns d1 - d2. Double-check positive/negative values.
  • Forgetting time zones. If data is UTC, convert only at presentation or when a specific local timezone is required.
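The first pitfall is easy to demonstrate. A toy SQLite run (table t and its rows are made up) showing BETWEEN dropping a late-night row that the half-open filter keeps:

```python
import sqlite3

# BETWEEN '2025-03-01' AND '2025-03-02' ends at midnight on the 2nd,
# so a 23:59 row on the 2nd is silently excluded. The half-open filter
# [start, next_boundary) keeps it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (created_at TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?)",
    [("2025-03-01 10:00:00",), ("2025-03-02 23:59:00",)],
)

between = conn.execute(
    "SELECT COUNT(*) FROM t WHERE created_at BETWEEN '2025-03-01' AND '2025-03-02'"
).fetchone()[0]

half_open = conn.execute(
    "SELECT COUNT(*) FROM t WHERE created_at >= '2025-03-01' AND created_at < '2025-03-03'"
).fetchone()[0]

print(between, half_open)  # → 1 2  (BETWEEN misses the 23:59 row)
```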

Practical projects

  • Build a 13-month trend of monthly revenue with month-over-month growth.
  • Create a weekly cohort table: week of first purchase vs repeat purchases in the following 8 weeks.
  • Operations SLA dashboard: average, p90, p95 resolution time for support tickets by week.

Mini challenge

Given sessions(user_id, session_start TIMESTAMP, session_end TIMESTAMP), produce an hourly chart for the last 48 hours showing concurrent sessions (a session counts for every hour it overlaps). Hint: bucket both start and end to hours and expand via date generation or use overlaps logic if your warehouse supports it.
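If your database lacks a series generator, a recursive CTE can build the hourly spine for the challenge. A SQLite sketch with made-up bounds (PostgreSQL users would reach for generate_series instead):

```python
import sqlite3

# Generate an hourly spine with a recursive CTE (SQLite syntax). Each
# session would then join to every hour bucket it overlaps; the bounds
# here are invented for illustration.
conn = sqlite3.connect(":memory:")

hours = conn.execute(
    """
    WITH RECURSIVE spine(h) AS (
      SELECT '2025-03-01 00:00:00'
      UNION ALL
      SELECT datetime(h, '+1 hour') FROM spine
      WHERE h < '2025-03-01 03:00:00'
    )
    SELECT h FROM spine
    """
).fetchall()

print([h for (h,) in hours])
# → ['2025-03-01 00:00:00', '2025-03-01 01:00:00',
#    '2025-03-01 02:00:00', '2025-03-01 03:00:00']
```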

Next steps

  • Learn window functions to compute rolling averages and period-over-period changes.
  • Study time zone conversions and daylight-saving caveats.
  • Practice building robust date dimensions for consistent reporting.

When you feel ready, take the quick test below. Everyone can take it; saved progress is available to logged-in users.

Practice Exercises

3 exercises to complete

Instructions

Table: orders(id, created_at TIMESTAMP). Return two columns: month_start (DATE) and total_orders for the current calendar year, ordered by month_start.

Use proper month bucketing and filter only this year.

Expected Output
month_start | total_orders
2025-01-01  | 842
2025-02-01  | 799
2025-03-01  | 901
...

Date and Time Functions — Quick Test

Test your knowledge with 8 questions. Pass with 70% or higher.

