How Earthquake Locations Are Estimated

By Ben Williams · 8 min read

When an earthquake strikes, one of the first questions people ask is "where was it?" Within minutes, seismological agencies publish a location: a latitude, longitude, and depth. These numbers appear precise, sometimes down to fractions of a degree, but behind that apparent precision lies a complex estimation process that relies on physics, mathematics, and a network of instruments spread across the planet.

Understanding how earthquake locations are determined helps explain why initial locations sometimes shift, why offshore events are harder to pin down, and why two agencies can report slightly different locations for the same earthquake.

P Waves and S Waves: The Raw Data

Every earthquake generates several types of seismic waves. The two most important for location purposes are P waves (primary or compressional waves) and S waves (secondary or shear waves). These waves differ in a critical way: P waves travel faster than S waves through the same rock.

  • P waves travel at roughly 6-8 km/s through the Earth's crust, depending on rock type. They are the first signal to arrive at a seismometer.
  • S waves travel at roughly 3.5-4.5 km/s through the crust. They arrive after the P wave, and the delay between the two increases with distance from the earthquake.

This speed difference is the foundation of earthquake location. By measuring the time gap between the P-wave and S-wave arrivals at a single station, a seismologist can estimate how far away the earthquake occurred: the longer the gap, the greater the distance.
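The arithmetic behind this is simple. If the P and S waves travel the same path at speeds vp and vs, the S-P delay is d/vs − d/vp, which can be solved for the distance d. A minimal sketch, using illustrative average crustal velocities (real networks use calibrated regional values):

```python
def sp_distance_km(sp_delay_s, vp=6.0, vs=3.5):
    """Distance implied by an S-P arrival-time gap, assuming constant
    P and S speeds along the same path (a crude approximation)."""
    # t_s - t_p = d/vs - d/vp  =>  d = delay * vp*vs / (vp - vs)
    return sp_delay_s * (vp * vs) / (vp - vs)

# A 10-second S-P gap implies a distance of 84 km at these speeds.
print(round(sp_distance_km(10.0), 1))  # 84.0
```

The multiplier vp·vs/(vp − vs) is roughly 8 km per second of S-P delay for typical crustal speeds, which is why the back-of-the-envelope rule "8 km per second of S-P time" works reasonably well.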

From Distance to Location: Triangulation

A single station can determine distance but not direction. The earthquake could lie anywhere on a circle (or more precisely, a sphere) centered on that station at the estimated distance. To narrow down the location, data from multiple stations are combined.

With three or more stations, each contributing a distance estimate, the circles intersect at a point (or a small area). This geometric approach is often called triangulation, though the more technically accurate term is trilateration, since it uses distances rather than angles. In practice, seismologists use data from as many stations as possible to over-determine the solution and reduce uncertainty.
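The intersection idea can be sketched numerically. The toy example below finds the point whose distances to three stations best match the S-P-derived estimates; all coordinates and distances are synthetic, and it works on a flat plane with straight-line paths, unlike real solvers:

```python
import math

# Three stations on a flat 100 km x 100 km grid (synthetic coordinates),
# each with an S-P-derived distance estimate to the event in km.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
distances = [50.0, 67.1, 80.6]

def misfit(x, y):
    """Sum of squared differences between trial-point distances
    and the estimated distances."""
    return sum((math.hypot(x - sx, y - sy) - d) ** 2
               for (sx, sy), d in zip(stations, distances))

# Coarse grid search for the best-fitting epicenter.
best = min((misfit(x, y), x, y)
           for x in range(0, 101, 5)
           for y in range(0, 101, 5))
print(best[1], best[2])  # the grid point where the three circles meet
```

With perfectly consistent distances the circles meet at one point; with real, noisy estimates they overlap in a small region, which is why more stations and a formal least-squares fit are used instead of a literal circle-drawing exercise.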

The calculation is not as simple as drawing circles on a map, however. Seismic waves do not travel in straight lines. They refract as they pass through rock layers of different density and composition, curving downward into the Earth and back up to the surface. Accounting for these curved paths requires a velocity model: a mathematical description of how seismic wave speeds vary with depth and, in more advanced models, with horizontal position.

Seismic Networks and Station Coverage

The quality of an earthquake location depends heavily on the geometry of the seismic stations that record it. Several factors matter:

  • Number of stations: More stations provide more arrival-time measurements, which helps average out individual measurement errors.
  • Azimuthal coverage: Stations should surround the earthquake. If all stations lie to one side, the location estimate will be poorly constrained in the direction away from the stations.
  • Distance range: Having stations at a range of distances helps constrain the depth of the earthquake. Stations very close to the epicenter are particularly valuable for depth determination.
  • Station density: Closely spaced stations can detect smaller events and provide more precise locations.

Global networks such as the Global Seismographic Network (GSN) and the International Monitoring System (IMS) provide baseline coverage worldwide. Regional and national networks add density in populated and seismically active areas. Countries like Japan, the United States, and Italy operate some of the densest seismic networks on Earth, capable of locating events down to magnitude 0 or below within their borders.

The Location Algorithm

Modern earthquake location is performed computationally using iterative least-squares algorithms or, increasingly, probabilistic methods. The general process works as follows:

  • Step 1: An initial guess for the earthquake location (latitude, longitude, depth, and origin time) is made, often based on the station that recorded the first arrival.
  • Step 2: Using the velocity model, the algorithm calculates what the arrival times at each station should be for that guessed location.
  • Step 3: The calculated arrival times are compared to the observed arrival times. The differences (residuals) indicate how wrong the guess is.
  • Step 4: The location is adjusted to minimize the residuals, and steps 2-3 are repeated until the solution converges.
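The four steps above can be sketched as a Gauss-Newton iteration. This toy version works in 2-D with a constant P velocity and synthetic "observed" arrival times; production codes solve in 3-D with layered or fully 3-D velocity models:

```python
import numpy as np

VP = 6.0  # km/s, assumed constant crustal P speed (illustrative)

# Four synthetic stations and arrival times generated from a known source.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_src, true_t0 = np.array([40.0, 30.0]), 5.0
obs = true_t0 + np.linalg.norm(stations - true_src, axis=1) / VP

x = np.array([50.0, 50.0, 0.0])  # Step 1: initial guess (x, y, origin time)
for _ in range(10):
    d = np.linalg.norm(stations - x[:2], axis=1)
    pred = x[2] + d / VP                       # Step 2: predicted arrivals
    r = obs - pred                             # Step 3: residuals
    # Step 4: linearize (Jacobian of arrival time w.r.t. x, y, t0)
    # and solve for the update that reduces the residuals.
    J = np.column_stack([(x[:2] - stations) / (VP * d[:, None]),
                         np.ones(len(stations))])
    dx, *_ = np.linalg.lstsq(J, r, rcond=None)
    x = x + dx
print(np.round(x, 2))  # converges back to the synthetic source
```

With clean synthetic data the iteration recovers the source exactly; with real picks the residuals never reach zero, and their final scatter feeds directly into the published error ellipse.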

The final location comes with an uncertainty estimate, often expressed as an error ellipse on the map and a depth uncertainty. These uncertainties are routinely published in earthquake catalogs but are often overlooked by the public and the media.

Why Locations Change

Initial earthquake locations are published within minutes of an event, sometimes within seconds for regions with real-time processing. These rapid locations serve emergency response needs but are based on limited data. Locations typically get revised for several reasons:

  • More stations report in: Distant stations may take several minutes to record the earthquake. As more arrival times become available, the location improves. This is similar to how magnitudes get revised as more data arrive.
  • Analysts review the data: Automated picks of P and S arrival times are fast but imperfect. Human analysts review and correct these picks, often improving the location significantly.
  • Better velocity models are applied: Initial locations may use a simple, global velocity model. Revised locations can incorporate regional models that better represent local geology.
  • Waveform-based methods are applied: Advanced techniques that use the full waveform rather than just arrival times can produce more precise locations but require more computation time.

For significant earthquakes, the location may be updated several times over days or weeks. Final catalog locations, published months later, represent the best estimate achievable with all available data. Agencies involved in rapid reporting must balance speed against accuracy in these initial estimates.

Offshore Challenges

Locating earthquakes beneath the ocean presents unique difficulties. The fundamental problem is simple: there are almost no seismometers on the ocean floor. Seismic networks are concentrated on land, which means offshore earthquakes are recorded by stations that all lie on one side of the event.

This one-sided geometry degrades location accuracy in predictable ways:

  • Epicentral location: The position on the map may be uncertain by 10-20 km or more for events far from shore, compared to 1-2 km for well-recorded onshore events.
  • Depth: Depth is the hardest parameter to constrain for offshore events. Without nearby stations, the trade-off between depth and origin time becomes severe, and reported depths may default to a fixed value (commonly 10 km or 33 km) rather than a well-determined estimate.
  • Magnitude: Fewer recording stations also affect magnitude estimation, since magnitudes are calculated from amplitude measurements at multiple stations.

Ocean-bottom seismometers (OBS) can be deployed to improve coverage, but they are expensive to install and maintain, and data retrieval typically requires a ship visit. Some newer OBS systems transmit data via acoustic modem to a surface buoy, enabling near-real-time monitoring, but coverage remains sparse compared to land networks.

Subduction Zone Complications

Many of the world's largest earthquakes occur at subduction zones, where oceanic plates dive beneath continental plates. These zones present additional location challenges because the subducting slab creates strong lateral variations in seismic wave speed. A wave traveling down through the cold, fast slab arrives earlier than a wave traveling through the warmer surrounding mantle. If the velocity model does not account for the slab, the earthquake location will be systematically biased.

Modern three-dimensional velocity models have reduced these biases substantially, but they have not eliminated them. For the most precise locations in subduction zones, seismologists use local network data combined with slab-specific velocity models.

Relative vs. Absolute Locations

A powerful technique for improving earthquake locations involves abandoning the attempt to determine each event's absolute position and instead focusing on the relative positions of nearby events. Methods such as double-difference relocation compare the arrival times of pairs of earthquakes at the same station. Because both events travel through nearly the same path to the station, most of the velocity-model error cancels out.
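The cancellation at the heart of double-difference methods can be shown in a few lines. In the synthetic example below, two nearby events are recorded at one station whose travel times carry a shared bias from an imperfect velocity model; the absolute residual keeps the bias, while the double difference removes it:

```python
import math

VP = 6.0                                   # km/s, illustrative
station = (80.0, 0.0)
ev_a, ev_b = (10.0, 20.0), (11.0, 21.0)    # two nearby events (km)
bias = 0.7                                 # shared velocity-model error (s)

def travel(ev):
    """Predicted travel time with a straight path and constant speed."""
    return math.hypot(ev[0] - station[0], ev[1] - station[1]) / VP

obs_a = travel(ev_a) + bias  # "observed" times include the shared bias
obs_b = travel(ev_b) + bias

# The absolute residual for event A retains the full model error...
print(round(obs_a - travel(ev_a), 3))  # 0.7
# ...but the double difference between the event pair cancels it.
dd = (obs_a - obs_b) - (travel(ev_a) - travel(ev_b))
print(round(abs(dd), 3))  # 0.0
```

Because the common path error subtracts out, what remains constrains only the offset between the two events, which is why relative positions can be resolved far more precisely than absolute ones.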

The result is a set of locations with very precise relative positions, often accurate to tens of meters, even though the absolute position of the cluster may still be uncertain by several kilometers. These relocated catalogs have revealed remarkable details in fault structure, including individual fault strands, triggered secondary faults, and the three-dimensional geometry of aftershock zones.

The Limits of Precision

No earthquake location is exact. Every reported position is an estimate with an associated uncertainty. For well-recorded onshore events in regions with dense networks and good velocity models, the uncertainty may be less than a kilometer horizontally and a few kilometers in depth. For poorly recorded offshore events or deep earthquakes far from any network, the uncertainty may exceed 20 km.

Recognizing these limits is important for anyone interpreting earthquake data. A reported epicenter is not a pinpoint on a map but a probability distribution, and the depth value, unless specifically flagged as well-constrained, should be treated with particular caution. Good seismic catalogs include uncertainty estimates alongside every location, and consulting those estimates is essential for any serious analysis.
