K Index Calculation

```python
# Install pooch (not currently in VRE)
import sys
!{sys.executable} -m pip install pooch

import datetime as dt
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pooch
from viresclient import SwarmRequest
import ipywidgets as widgets
from warnings import filterwarnings
filterwarnings(action="ignore")

# Data dependencies (pooch caches this in ~/.cache/pooch/)
esk_k_ind_file = pooch.retrieve(
    "https://raw.githubusercontent.com/MagneticEarth/IAGA_SummerSchool2019/master/data/external/k_inds/esk/2003.esk",
    known_hash="233246e167a212cd1afa33ff2fe130fbc308cd2ae7971c6c2afcd363c9775c18"
)
```
Downloading data from 'https://raw.githubusercontent.com/MagneticEarth/IAGA_SummerSchool2019/master/data/external/k_inds/esk/2003.esk' to file '/home/jovyan/.cache/pooch/cb259464aff5d2a782ba1ab3ddc07f1d-2003.esk'.

Calculating K-indices for a single observatory

The K-index is a local geomagnetic activity index devised by Julius Bartels in 1938 to give a simple measure of the degree of geomagnetic disturbance during each 3-hour (UT) interval seen at a single magnetic observatory. Data from the observatory magnetometers are used to assign a number in the range 0-9 to each 3-hour interval, with K=0 indicating very little geomagnetic activity and K=9 representing an extreme geomagnetic storm. The K-index was introduced at the time of photographic recording, when magnetograms recorded variations in the horizontal geomagnetic field elements declination (D) and horizontal intensity (H), and in the vertical intensity (Z).

To derive a K-index, an observer would fit, by eye, a ‘solar regular variation’ (\(S_R\)) curve to the records of D and H and measure the range (maximum minus minimum) of the deviation of the recording from the curve. The K-index was then assigned according to a conversion table, with the greater range in D and H ‘winning’. The north component (X) may be used instead of H, and the east component (Y) instead of D (X and Y are used in the examples below; see http://isgi.unistra.fr/what_are_kindices.php for more details on the K-index). The vertical component Z is not used because it is liable to contamination by locally induced currents.

The conversion from range in nanoteslas to index is quasi-logarithmic. The conversion table varies with latitude in an attempt to normalise the K-index distribution for observatories at different latitudes. The table for Eskdalemuir is shown below.

| K | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Lower bound (nT) | 8 | 15 | 30 | 60 | 105 | 180 | 300 | 500 | 750 |

This means that, for instance, K=2 if the measured range is in the interval [15, 30) nT.
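As a quick illustration (a minimal sketch, not part of the original workflow; the name `esk_lower_bounds` is introduced here for convenience), the conversion from a range in nT to K can be expressed with `numpy.digitize` against the ESK lower bounds. The same idea is used by the helper functions later in this notebook.

```python
import numpy as np

# ESK lower bounds (nT) for K = 0..9, as in the table above
esk_lower_bounds = [0, 8, 15, 30, 60, 105, 180, 300, 500, 750]

# np.digitize returns the 1-based bin index of each range;
# subtracting 1 converts it to the K value
ranges_nT = np.array([5.0, 20.0, 110.0, 800.0])
K = np.digitize(ranges_nT, bins=esk_lower_bounds, right=False) - 1
print(K)  # -> [0 2 5 9]
```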

There was a long debate in IAGA Division V about algorithms that could adequately reproduce the K-indices that an experienced observer would assign (see Menvielle et al., 1995, for a review). The algorithms and code approved by IAGA are available from the International Service of Geomagnetic Indices: http://isgi.unistra.fr/softwares.php.

Example

In the following cells, we illustrate a possible approach. We assume the so-called regular daily variation \(S_R\) is made up of 24h, 12h and 8h signals (and, possibly, higher harmonics). A Fourier analysis can be used to investigate this. The functions in the cell below calculate Fourier coefficients from a data sample of one-minute data values over a 24-hour UT interval, and then synthesise a smooth fit using the Fourier coefficients.

For some days this simple approach to estimating \(S_R\) seems to work well; on others it is obviously wrong. Think about another approach you might take.
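One alternative you might consider is to estimate \(S_R\) from the average daily variation of the quietest days of the same month, selected using the official K indices. The sketch below is only an illustration of that idea: `quiet_day_curve` is a hypothetical helper, and it assumes the `df_obs` and `df_K_official` dataframes loaded later in this notebook, with complete days of one-minute values.

```python
def quiet_day_curve(df_obs, df_K_official, day, component="X", n_quiet=5):
    """Estimate S_R for `day` as the mean daily curve of the n_quiet
    quietest days (lowest summed K) in the same month.
    Assumes complete days of one-minute values (1440 samples per day)."""
    month = day.strftime("%Y-%m")
    # Rank the days of the month by their summed K index and keep the quietest
    K_month = df_K_official.loc[month]
    quiet_days = K_month.sum(axis=1).nsmallest(n_quiet).index
    # Average the mean-removed daily variation over those quiet days
    curves = []
    for d in quiet_days:
        v = df_obs.loc[d.strftime("%Y-%m-%d"), component]
        curves.append((v - v.mean()).to_numpy())
    return np.mean(curves, axis=0)
```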

We then attempt to calculate K-indices for the chosen day: the Fourier series is computed up to the selected number of harmonics, the synthetic harmonic signal is subtracted from the data, the 3-hour ranges of the residual are calculated, and these are converted into the corresponding K-indices. The functions to do this are also included in the following cell.

```python
def fourier(v, nhar):
    """Return the first `nhar` Fourier coefficient pairs (A_n, B_n) of v;
    element 0 holds the mean of v."""
    npts = len(v)
    f = 2.0/npts
    t = np.linspace(0, npts, npts, endpoint=False)*2*np.pi/npts
    vmn = np.mean(v)
    v = v - vmn
    cofs = [0]*(nhar+1)
    cofs[0] = (vmn, 0)
    for i in range(1, nhar+1):
        c, s = np.cos(i*t), np.sin(i*t)
        cofs[i] = (np.dot(v, c)*f, np.dot(v, s)*f)
    return cofs


def fourier_synth(cofs, npts):
    """Synthesise a smooth curve of length `npts` from the coefficients
    returned by `fourier` (the mean term is omitted)."""
    nt = len(cofs)
    syn = np.zeros(npts)
    t = np.linspace(0, npts, npts, endpoint=False)*2*np.pi/npts
    for n in range(1, nt):
        for j in range(npts):
            syn[j] += cofs[n][0]*np.cos(n*t[j]) + cofs[n][1]*np.sin(n*t[j])
    return syn


# Define K-index conversion table for ESK
K_conversions = {
    f"K{level}": level_bin for level, level_bin in enumerate(
        (0, 8, 15, 30, 60, 105, 180, 300, 500, 750)
    )
}
# Define reverse mapping
nT_to_K = {v: k for k, v in K_conversions.items()}


def K_calc(d, synd, Kb=K_conversions):
    """Convert the 3-hourly ranges of (d - synd) into K values, one per 3-hour interval."""
    tmp = np.ptp((d-synd).reshape(8, 180), axis=1)
    return list(np.digitize(tmp, bins=list(Kb.values()), right=False) - 1)


def load_official_K(filepath=esk_k_ind_file):
    """Load the official ESK K-index file: one row per day, one column per 3-hour interval."""
    df = pd.read_csv(filepath, skiprows=0, header=None, delim_whitespace=True,
                     parse_dates=[[2, 1, 0]], index_col=0)
    df = df.drop(3, axis=1)
    df.index.name = 'Date'
    df.columns = ['00', '03', '06', '09', '12', '15', '18', '21']
    return df


def load_ESK_2003():
    """Fetch one-minute ESK observatory data for 2003 via VirES."""
    request = SwarmRequest()
    request.set_collection("SW_OPER_AUX_OBSM2_:ESK", verbose=False)
    request.set_products(measurements=["B_NEC", "IAGA_code"])
    data = request.get_between(
        dt.datetime(2003, 1, 1),
        dt.datetime(2004, 1, 1),
    )
    df = data.as_dataframe(expand=True).drop(
        columns=["Spacecraft"]
    )
    df = df.rename(columns={f"B_NEC_{i}": j for i, j in zip("NEC", "XYZ")})
    return df
```

First, load the (X, Y, Z) one-minute data from Eskdalemuir for 2003 into a pandas DataFrame.

```python
df_obs = load_ESK_2003()
df_obs.head()
```
| Timestamp | Radius | Latitude | Longitude | IAGA_code | X | Y | Z |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2003-01-01 00:00:00 | 6.363967e+06 | 55.119672 | 356.8 | ESK | 17196.514832 | -1473.2 | 46252.152020 |
| 2003-01-01 00:01:00 | 6.363967e+06 | 55.119672 | 356.8 | ESK | 17196.014835 | -1473.4 | 46252.150446 |
| 2003-01-01 00:02:00 | 6.363967e+06 | 55.119672 | 356.8 | ESK | 17196.514832 | -1473.4 | 46252.152020 |
| 2003-01-01 00:03:00 | 6.363967e+06 | 55.119672 | 356.8 | ESK | 17196.715461 | -1473.8 | 46251.952650 |
| 2003-01-01 00:04:00 | 6.363967e+06 | 55.119672 | 356.8 | ESK | 17197.115459 | -1473.7 | 46251.953909 |

Load the official K index data (available from http://www.geomag.bgs.ac.uk/data_service/data/magnetic_indices/k_indices) to compare with later.

```python
df_K_official = load_official_K()
df_K_official.head()
```
| Date | 00 | 03 | 06 | 09 | 12 | 15 | 18 | 21 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2003-01-01 | 1 | 2 | 2 | 1 | 1 | 1 | 2 | 3 |
| 2003-01-02 | 2 | 0 | 0 | 1 | 1 | 1 | 2 | 2 |
| 2003-01-03 | 2 | 0 | 1 | 1 | 3 | 5 | 4 | 4 |
| 2003-01-04 | 4 | 4 | 2 | 2 | 2 | 2 | 3 | 3 |
| 2003-01-05 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 3 |

Evaluate K indices for a given day (a compact sketch of these steps follows below; the interactive version is in the subsequent code cell):

  • For each of \(X\) and \(Y\):

    • Perform a Fourier analysis on the data to find the regular daily variation, \(S_R\)

    • Over each 3-hour interval, find the range (maximum minus minimum) of the differences from \(S_R\)

    • Convert from nT to \(K\) using the conversion table for ESK

  • Pick the greater of \(K(X)\) and \(K(Y)\) and compare with the official K index
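Here is a compact, non-interactive sketch of those steps. It assumes the helper functions (`fourier`, `fourier_synth`, `K_calc`) and the dataframes (`df_obs`, `df_K_official`) defined above, and a complete day of 1440 one-minute values; the day chosen is arbitrary.

```python
# Non-interactive sketch of the K determination for one day
day = "2003-10-29"
_df = df_obs.loc[day]

K_per_component = {}
for component in ("X", "Y"):
    v = (_df[component] - _df[component].mean()).values  # remove the daily mean
    cofs = fourier(v, 3)                                  # fit 24h, 12h and 8h harmonics
    syn = fourier_synth(cofs, len(v))                     # synthesise the S_R estimate
    K_per_component[component] = K_calc(v, syn)           # 3-hourly ranges -> K

# Take the greater of K(X) and K(Y) in each 3-hour interval
K_day = [max(kx, ky) for kx, ky in zip(K_per_component["X"], K_per_component["Y"])]
print("Determined K:", K_day)
print("Official K:  ", list(df_K_official.loc[day]))
```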

```python
def analyse_day(day=dt.date(2003, 1, 1), n_harmonics=3, df=df_obs, df_K_official=df_K_official):
    """Generate figure illustrating the K index calculation for a given day"""
    # Select given day
    _df = df.loc[day.isoformat()]
    _df_K = df_K_official.loc[day.isoformat()]
    # Select X & Y data and remove daily mean
    x = (_df["X"] - _df["X"].mean()).values
    y = (_df["Y"] - _df["Y"].mean()).values
    # Perform Fourier analysis of X & Y separately
    xcofs = fourier(x, n_harmonics)
    synx = fourier_synth(xcofs, len(x))
    ycofs = fourier(y, n_harmonics)
    syny = fourier_synth(ycofs, len(y))
    # Build plot
    t = np.linspace(0, 1440, 1440, endpoint=False)/60
    fig, axes = plt.subplots(2, 1, figsize=(15, 10), sharex=True)
    # Plot X & Y data with approximated variation
    axes[0].plot(t, x, color="tab:blue", alpha=0.5)
    axes[0].plot(t, synx, color="tab:blue", label="X")
    axes[0].plot(t, y, color="tab:red", alpha=0.5)
    axes[0].plot(t, syny, color="tab:red", label="Y")
    # Plot the differences
    axes[1].plot(t, (x-synx), color="tab:blue")
    axes[1].plot(t, (y-syny), color="tab:red")
    # Find and plot min/max bounds over 3-hourly intervals
    minX = np.min((x-synx).reshape(8, 180), axis=1)
    maxX = np.max((x-synx).reshape(8, 180), axis=1)
    minY = np.min((y-syny).reshape(8, 180), axis=1)
    maxY = np.max((y-syny).reshape(8, 180), axis=1)
    t_3hours = np.linspace(0, 1440, 9, endpoint=True)/60
    axes[1].fill_between(t_3hours, list(minX)+[0], list(maxX)+[0],
                         step="post", color="tab:blue", alpha=0.5)
    axes[1].fill_between(t_3hours, list(minY)+[0], list(maxY)+[0],
                         step="post", color="tab:red", alpha=0.5)
    # Determine K index from each of X & Y
    K_X = np.digitize((maxX-minX), bins=list(K_conversions.values()), right=False) - 1
    K_Y = np.digitize((maxY-minY), bins=list(K_conversions.values()), right=False) - 1
    for i, (K_X_i, K_Y_i) in enumerate(zip(K_X, K_Y)):
        # Display determined K from X & Y
        px = i*3
        py = axes[1].get_ylim()[1]
        axes[1].annotate(
            f"K(X): {K_X_i}", (px, py), xytext=(30, 18), textcoords="offset pixels",
            color="tab:blue", size=12,
        )
        axes[1].annotate(
            f"K(Y): {K_Y_i}", (px, py), xytext=(30, 3), textcoords="offset pixels",
            color="tab:red", size=12,
        )
        # Display comparison with the official K index
        K_ours = max(K_X_i, K_Y_i)
        K_official = _df_K[i]
        axes[1].annotate(
            f"{K_ours}\n{K_official}", (i*3, axes[1].get_ylim()[0]),
            xytext=(40, -70), textcoords="offset pixels"
        )
    axes[1].annotate(
        f"Determined K:\nOfficial K:", (0, axes[1].get_ylim()[0]),
        xytext=(-80, -70), textcoords="offset pixels"
    )
    # Finalise figure
    for ax in axes:
        ax.grid()
        ax.xaxis.set_ticks(np.arange(0, 27, 3))
    axes[1].set_ylabel("Residuals [nT]")
    axes[1].set_xlabel("UT [hour]")
    axes[0].set_ylabel("[nT]")
    axes[0].legend(loc="upper right")
    fig.suptitle(f"ESK: {day.isoformat()}", y=0.9)
    return fig, axes


def make_widgets_K_index_calc():
    day = widgets.SelectionSlider(
        options=[t.date() for t in pd.date_range(dt.date(2003, 1, 1), dt.date(2003, 12, 31))],
        description="Select day:",
        layout=widgets.Layout(width='700px')
    )
    # day = widgets.DatePicker(value=dt.date(2003, 1, 1), description="Select day:")
    n_harmonics = widgets.SelectionSlider(options=range(1, 11), value=3, description="# harmonics:")
    return widgets.VBox(
        [day, n_harmonics, widgets.interactive_output(
            analyse_day, {"day": day, "n_harmonics": n_harmonics}
        )],
    )


make_widgets_K_index_calc()
```

Statistics of the K index

We will use the official K index from ESK to probe some statistics through the year 2003.

Histograms of the K indices for each 3-hour period:

```python
axes = df_K_official.hist(
    figsize=(12, 12),
    bins=range(11),
    sharey=True,
    align="left",
    rwidth=0.8,
)
plt.suptitle('ESK 2003: Distribution of K-indices for each 3-hour interval')
axes[-1, 0].set_ylabel("Frequency")
axes[-1, 0].set_xlabel("K");
```

[Figure: ESK 2003, distribution of K-indices for each 3-hour interval (one histogram per interval)]

… plotted side by side:

```python
plt.figure(figsize=(7, 7))
plt.hist(df_K_official.values, bins=range(11), align='left')
plt.legend(df_K_official.columns)
plt.ylabel('Number of 3-hour intervals')
plt.xlabel('K');
```

[Figure: K-index histograms for the eight 3-hour intervals, plotted side by side]

… and stacked together:

```python
plt.figure(figsize=(7, 7))
plt.hist(df_K_official.values, bins=range(11), stacked=True, align='left', rwidth=0.8)
plt.legend(df_K_official.columns)
plt.ylabel('Number of 3-hour intervals')
plt.xlabel('K');
```

[Figure: K-index histograms for the eight 3-hour intervals, stacked]

We also compute a daily sum of the K-indices for the 2003 file and list days with high and low summed values. Note that this summation is not really appropriate because the K-index is quasi-logarithmic; however, it is a common simple measure of quiet and disturbed days. (These might be interesting days for you to look at.)

```python
df_K_official['Ksum'] = df_K_official.sum(axis=1)
Ksort = df_K_official.sort_values('Ksum')
print('Quiet days: \n\n', Ksort.head(10), '\n\n')
print('Disturbed days: \n\n', Ksort.tail(10))
```
```
Quiet days:

             00  03  06  09  12  15  18  21  Ksum
Date
2003-10-11    1   0   0   0   1   0   0   0     2
2003-12-19    0   0   0   0   1   0   0   1     2
2003-12-18    1   1   0   0   1   0   0   0     3
2003-03-25    0   1   1   0   1   1   1   0     5
2003-12-25    1   0   0   0   2   1   0   1     5
2003-01-08    2   0   1   0   1   0   1   0     5
2003-01-09    0   0   0   0   0   1   2   2     5
2003-07-08    0   0   0   0   2   2   1   0     5
2003-10-12    0   0   0   1   1   0   2   2     6
2003-01-16    1   0   0   1   2   0   1   1     6


Disturbed days:

             00  03  06  09  12  15  18  21  Ksum
Date
2003-09-17    4   5   4   4   5   5   4   5    36
2003-11-11    5   4   5   4   4   5   5   4    36
2003-02-02    5   4   4   4   4   6   5   5    37
2003-05-30    7   5   3   3   4   6   5   5    38
2003-05-29    3   3   3   2   6   6   7   8    38
2003-08-18    5   5   5   5   6   6   6   5    43
2003-11-20    1   3   5   4   7   9   9   8    46
2003-10-31    9   6   5   6   7   5   4   4    46
2003-10-30    8   5   4   4   5   5   9   9    49
2003-10-29    4   3   9   7   8   8   9   9    57
```

Note on the Fast Fourier Transform

In the examples above we computed Fourier coefficients in the ‘traditional’ way, so that if \(F(t)\) is a Fourier series representation of \(f(t)\), then,

\[\begin{align}F(t) &= A_o+\sum_{n=1}^N A_n \cos\left(\frac{2\pi nt}{T}\right)+B_n \sin\left(\frac{2\pi nt}{T}\right)\end{align}\]

where \(T\) is the fundamental period of \(F(t)\). The \(A_n\) and \(B_n\) are estimated by

\[\begin{split}\begin{align}A_o&=\frac{1}{T}\int_0^T f(t) dt\\A_n&=\frac{2}{T}\int_0^T f(t)\cos\left(\frac{2\pi nt}{T}\right) dt\\B_n&=\frac{2}{T}\int_0^T f(t)\sin\left(\frac{2\pi nt}{T}\right) dt\end{align}\end{split}\]

With \(N\) samples of digital data, the integral for \(A_n\) may be replaced by the summation

\[\begin{split}\begin{align}A_n&=\frac{2}{T}\sum_{j=0}^{N-1} f_j\cos\left(\frac{2\pi nj\Delta t}{T}\right) \Delta t\\&=\frac{2}{N}\sum_{j=0}^{N-1} f_j\cos\left(\frac{2\pi nj}{N}\right)\end{align}\end{split}\]

where the sampling interval \(\Delta t\) is given by \(T = N \Delta t\) and \(f_j = f(j \Delta t)\). A similar expression applies for the \(B_n\), and these are the coefficients returned by the function fourier above.
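As a quick check (a sketch assuming the `df_obs` dataframe and the `fourier` function defined earlier), the discrete sums above can be evaluated directly and compared with the coefficients returned by `fourier`:

```python
# Evaluate the discrete sums for A_1 and B_1 directly and compare with fourier()
x = (df_obs.loc["2003-01-01", "X"] - df_obs.loc["2003-01-01", "X"].mean()).values
N = len(x)
j = np.arange(N)
n = 1  # the 24-hour harmonic
A_1 = (2/N) * np.sum(x * np.cos(2*np.pi*n*j/N))
B_1 = (2/N) * np.sum(x * np.sin(2*np.pi*n*j/N))
print(A_1, B_1)          # direct summation
print(fourier(x, 1)[1])  # the same pair (A_1, B_1) from the fourier() helper
```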

The fast Fourier transform (FFT) offers a computationally efficient means of finding the Fourier coefficients. The conventions for the FFT and its inverse (IFFT) vary from package to package. In the scipy.fftpack package, the FFT of a sequence \(x_n\) of length \(N\) is defined as

\[\begin{split}\begin{align}y_k&=\sum_{n=0}^{N-1} x_n\exp\left(-\frac{2\pi i\thinspace kn}{N}\right)\\&=\sum_{n=0}^{N-1} x_n\left(\cos\left(\frac{2\pi \thinspace kn}{N}\right)-i\sin\left(\frac{2\pi \thinspace kn}{N}\right)\right)\end{align}\end{split}\]

with the inverse defined as,

\[\begin{split}\begin{align}x_n&=\frac{1}{N}\sum_{k=0}^{N-1} y_k\exp\left(\frac{2\pi i\thinspace kn}{N}\right)\\\end{align}\end{split}\]

(The scipy documentation is perhaps a little confusing here: for \(N\) even, it describes \(y_1, y_2, \dots, y_{N/2-1}\) as corresponding to increasing positive frequencies, and \(y_{N/2}, y_{N/2+1}, \dots, y_{N-1}\) as ordered by decreasing negative frequency. See: https://docs.scipy.org/doc/scipy/reference/tutorial/fft.html.)

The interpretation is that if \(y_k = a_k + i b_k\) then, for real input and \(N\) even, \(y_{N-k} = a_k - i b_k\), and so

\[\begin{split}\begin{align}a_k&=\frac{1}{2}\text{Re}\left(y_k+y_{N-k}\right)\\b_k&=\frac{1}{2}\text{Im}\left(y_k-y_{N-k}\right)\end{align}\end{split}\]

and so we expect the relationship to the digitised Fourier series coefficients returned by the function fourier defined above to be,

\[\begin{split}\begin{align}A_k&=\phantom{-}\frac{1}{N}\text{Re}\left(y_k+y_{N-k}\right)=\phantom{-}\frac{2a_k}{N}\\B_k&=-\frac{1}{N}\text{Im}\left(y_k-y_{N-k}\right)=-\frac{2b_k}{N}\end{align}\end{split}\]

The following shows the equivalence between the conventional Fourier series approach and the FFT.

```python
from scipy.fftpack import fft

# Compute the fourier series as before
_df = df_obs.loc["2003-01-01"]
x = (_df["X"] - _df["X"].mean()).values
xcofs = fourier(x, 3)

# Compute using scipy FFT
npts = len(x)
xfft = fft(x)

# Compare results for the 24-hour component
k = 1
print('Fourier coefficients: \n', f'A1 = {xcofs[1][0]} \n', f'B1 = {xcofs[1][1]} \n')
print('scipy FFT outputs: \n', f'a1 = {np.real(xfft[k]+xfft[npts-k])/npts} \n',
      f'b1 = {-np.imag(xfft[k]-xfft[npts-k])/npts} \n')
```
```
Fourier coefficients:
 A1 = 1.9181527666779163
 B1 = 7.8492766783913055

scipy FFT outputs:
 a1 = 1.9181527666779157
 b1 = 7.849276678391306
```

References

Menvielle, M. et al. (1995) ‘Computer production of K indices: review and comparison of methods’, Geophysical Journal International. Oxford University Press, 123(3), pp. 866–886. doi: 10.1111/j.1365-246X.1995.tb06895.x.
