Trial-based Database

Creating a Trial-based database

In this section, we will walk through the process of creating a trial-based database.

First, you need to set up a test database. Import the required libraries and create an API instance:

[2]:
import numpy as np  # used later to generate the trial data

from dunderlab.api import aioAPI as API
from dunderlab.api.utils import JSON

api = API('http://localhost:8000/timescaledbapp/')

Register a new source for the database:

[3]:
source_response = await api.source.post({
    'label': 'Test.v2',
    'name': 'Test Database',
    'location': 'Eje Cafetero',
    'device': 'None',
    'protocol': 'None',
    'version': '0.1',
    'description': 'Sample trial-based database for TimeScaleDBApp',
})

JSON(source_response)

{
  "label": "Test.v2",
  "name": "Test Database",
  "location": "Eje Cafetero",
  "device": "None",
  "protocol": "None",
  "version": "0.1",
  "description": "Sample trial-based database for TimeScaleDBApp",
  "created": "2023-06-06T03:37:32.561508Z",
}

Register a new measure:

[4]:
measure_response = await api.measure.post({
    'source': 'Test.v2',
    'label': 'measure_02',
    'name': 'Measure 02',
    'description': 'Simple sinusoidals for 64 channels at different frequencies',
})

JSON(measure_response)

{
  "label": "measure_02",
  "name": "Measure 02",
  "description": "Simple sinusoidals for 64 channels at different frequencies",
  "source": "Test.v2",
}

Register the channels:

[5]:
channels_names = ['Fp1','Fp2','F7','F3','Fz','F4','F8','T7','T8','P7','P3','Pz','P4','P8','O1','O2']

channel_response = await api.channel.post([{
    'source': 'Test.v2',
    'measure': 'measure_02',
    'name': f'Channel {channel}',
    'label': f'{channel}',
    'unit': 'u',
    'sampling_rate': '1000',
} for channel in channels_names])

JSON(channel_response)
[
  {
    "label": "Fp1",
    "name": "Channel Fp1",
    "unit": "u",
    "sampling_rate": 1000.0,
    "description": null,
    "measure": "measure_02",
    "source": "Test.v2",
  },
  {
    "label": "Fp2",
    "name": "Channel Fp2",
    "unit": "u",
    "sampling_rate": 1000.0,
    "description": null,
    "measure": "measure_02",
    "source": "Test.v2",
  },
  {
    "label": "F7",
    "name": "Channel F7",
    "unit": "u",
    "sampling_rate": 1000.0,
    "description": null,
    "measure": "measure_02",
    "source": "Test.v2",
  },
  {
    "label": "F3",
    "name": "Channel F3",
    "unit": "u",
    "sampling_rate": 1000.0,
    "description": null,
    "measure": "measure_02",
    "source": "Test.v2",
  },
  {
    "label": "Fz",
    "name": "Channel Fz",
    "unit": "u",
    "sampling_rate": 1000.0,
    "description": null,
    "measure": "measure_02",
    "source": "Test.v2",
  }, ...]

Now that you have set up the test database and registered the required components (source, measure, and channels), you can proceed with uploading time series data and creating trials. After uploading the data, you can query the trials and reconstruct them for further analysis, as the following sections demonstrate.

Creating data structure and class vector

In this section, we will create the required data structure: a three-dimensional array of shape (trials, channels, time) and a vector of class labels. The array holds the signal for every trial, channel, and time sample in the dataset, and the class vector contains the class label for each trial.

[6]:
trials_per_class = 3
classes = 4  # number of classes; reused below to hold the class labels themselves

# Random data with shape (trials, channels, time)
raw_data = np.random.normal(size=(trials_per_class * classes, len(channels_names), 1000))

# One label per trial ('cLass-0' ... 'cLass-3'), shuffled so trials are not grouped by class
classes = np.array([f'cLass-{cls}' for cls in range(classes)] * trials_per_class)
np.random.shuffle(classes)

raw_data.shape, classes.shape
[6]:
((12, 16, 1000), (12,))

This code snippet generates random data with a shape of (12, 16, 1000), representing 12 trials, 16 channels, and 1000 time samples. The trials are equally divided among four classes (three trials per class). The data is generated with NumPy's random normal distribution function, which creates an array with the specified shape.

In addition to the data, a class vector with 12 elements is created. Each element is a label of the form 'cLass-0' through 'cLass-3', and the labels are shuffled so that consecutive trials do not belong to the same class. The class vector will be used later to associate each trial with its corresponding class when analyzing the data.
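
As a quick sanity check (a hypothetical snippet, not part of the original notebook), you can confirm that the shuffled class vector still contains three trials for each of the four classes:

# Count how many trials carry each class label after shuffling
labels, counts = np.unique(classes, return_counts=True)
print(dict(zip(labels, counts)))
# Expected: {'cLass-0': 3, 'cLass-1': 3, 'cLass-2': 3, 'cLass-3': 3}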

Uploading trials

In this section, we will demonstrate how to upload the data to the database. The class label of each trial is stored in the 'chunk' field, which associates each block of time series data with a specific trial and its corresponding class.

[7]:
data = []
for i, (trial, class_) in enumerate(zip(raw_data, classes)):
    data.append({
        'source': 'Test.v2',
        'measure': 'measure_02',
        'timestamps': np.linspace(i, i+1, 1000, endpoint=False).tolist(),
        'chunk': class_,
        'values': {ch: v.tolist() for ch, v in zip(channels_names, trial)}
    })

JSON(data[:3])
[
  {
    "source": "Test.v2",
    "measure": "measure_02",
    "timestamps": [0.0, 0.001, 0.002, 0.003, 0.004, ...],
    "chunk": "cLass-1",
    "values":
    {
      "Fp1": [-1.5177400358889044, -0.788977172993793, -0.03230030278992793, 0.41776901531361754, -0.7912434982958045, ...],
      "Fp2": [0.7053272431256052, 0.564824599958775, 0.24947113891302522, -0.2770154941091011, -0.31160444728795944, ...],
      "F7": [0.9545144291923049, 1.7277966884063762, -1.6484868624637383, 0.016119001124564408, -0.4861008412875246, ...],
      "F3": [-0.9227370477718901, -0.39048273125408756, -0.3404258709300914, 0.2108472949983979, -0.6825153266253882, ...],
      "Fz": [-2.2721685600289145, 1.6465217890489954, 2.4536740401499553, 0.4587805541416744, 0.5222498770782393, ...],
      "F4": [-2.065727623114571, -0.6137497754493103, -0.5120253092698225, -1.2061519414122859, -0.5172526991706605, ...],
      "F8": [-0.2354701771428703, -0.05788085719708232, 1.4967474968045162, 0.2977586287358451, -1.3125657844386658, ...],
      "T7": [-0.8307155278732363, -0.04574153552659414, 0.7328696985634222, 0.8364956491191402, -0.8632108701385004, ...],
      "T8": [0.30415789479160416, -0.09329404781032742, -0.16647565473165887, -1.6772108070825946, -0.7091567813593158, ...],
      "P7": [0.0700610348471906, 0.470415488851524, -0.1429172361871878, -0.23370016340476757, -1.4200292373982424, ...],
      "P3": [-1.132314449965133, 0.8803346510091454, -0.24028711350552515, 1.1973949189400612, 0.5848272708925653, ...],
      "Pz": [-0.3983436027853963, 0.1753010169862216, -1.4235682356310062, 1.3670899833687626, -0.693947897947757, ...],
      "P4": [0.3044198977321669, -1.4638553728107317, -1.5445103158157458, 1.929256154921828, 0.2959167217238077, ...],
      "P8": [0.48336116101348336, -1.2066611879878453, 0.36635186465885605, 2.724188445610945, 1.7580035589679361, ...],
      "O1": [0.9107241319780226, 0.7784827648187811, 0.7271251801032198, -0.08952695919756336, 0.17023837167011382, ...],
      "O2": [-0.7224316241017974, 0.0359995110299849, -0.2719821240416557, -1.1524440036119972, 0.02657889109615089, ...],
    },
  },
  {
    "source": "Test.v2",
    "measure": "measure_02",
    "timestamps": [1.0, 1.001, 1.002, 1.003, 1.004, ...],
    "chunk": "cLass-3",
    "values":
    {
      "Fp1": [0.15757460335790932, 0.18407385559018555, -1.858944952817503, -0.18259325723811679, -0.08942599282561842, ...],
      "Fp2": [2.0456599951234318, -0.4177866686752268, 0.3533553580161102, 0.5792206187603901, 1.0543925748846936, ...],
      "F7": [0.4599550027154971, 0.4251086273656599, -1.4094730434307197, -0.6169529008616196, 0.7729269676818712, ...],
      "F3": [-0.3039814608541564, -0.9333737545789742, -0.012898438499063875, -0.6259690371142113, -0.5298860733825229, ...],
      "Fz": [0.0024979874394115103, -0.20726436971216827, 0.772497223798602, 0.7412561376618516, -0.8869863084166103, ...],
      "F4": [2.6089088375442553, 0.036566784290045874, 0.6302170809965103, -0.6710721895294811, 1.7268159831447145, ...],
      "F8": [0.7896841377699716, -0.09129662874036715, 1.233927154332806, -0.3618314977793315, 0.674114968280682, ...],
      "T7": [1.1444783489417711, 0.5085066607796808, -0.2015821334643892, 1.7970663852867632, -0.26226900743729886, ...],
      "T8": [0.8158189598655726, -0.632215885524743, -1.141325855939806, 1.7050139246846285, 1.1412048413703344, ...],
      "P7": [-0.25165686929478176, -0.6591896473008744, -0.1740432885135561, -1.1425806671083074, 0.9405016227455512, ...],
      "P3": [0.1890760235639249, 0.222556871114821, 0.3707604527713291, -0.37870912320741856, -1.1527907529741483, ...],
      "Pz": [0.9235829829987812, -0.7356827549036166, -1.739015788252254, -0.0736211248630558, -0.14240562228467862, ...],
      "P4": [-0.1636040040133006, -0.6600672237685036, 0.6750399733129159, -0.6405379531774668, 0.3808400944712207, ...],
      "P8": [1.170918422140867, -0.9590715008415003, -1.5070474413912516, 0.4583701619859625, -1.8998856249603135, ...],
      "O1": [-0.18090058711052673, -0.33376910589063075, 0.9521788832787587, -0.9059892366535587, 0.2505711384976162, ...],
      "O2": [-0.632352418867232, 1.3594451362707227, 0.8922169470760243, 0.6329083695299655, 1.1949299767988726, ...],
    },
  },
  {
    "source": "Test.v2",
    "measure": "measure_02",
    "timestamps": [2.0, 2.001, 2.002, 2.003, 2.004, ...],
    "chunk": "cLass-1",
    "values":
    {
      "Fp1": [-1.3749966783068264, 0.9200507823249415, -0.6619644915989598, -1.5929987819779858, -1.3293621563339744, ...],
      "Fp2": [-0.43236545715519153, -1.243815552719948, -0.3424818244410737, -1.0324664151008076, 0.7796609490822304, ...],
      "F7": [-1.306716328242299, -0.02633902402859367, 1.111812137935585, -0.678690911394305, -0.8775559069138437, ...],
      "F3": [0.8179691697280715, 0.27352634260811515, 2.05527665872821, -0.20851510067114823, -1.4056695644591788, ...],
      "Fz": [0.8573214258438003, 0.573852281775652, 0.6704757504898707, 0.47523381811961624, -0.1286658165361358, ...],
      "F4": [-0.14166487231873523, 0.3963015708828215, -0.5293816741374673, 0.19635244928198567, -0.8917110967528111, ...],
      "F8": [0.3885949317176455, -0.901823660687875, -0.1219507850243762, 1.3929476161642491, 0.41162903781471644, ...],
      "T7": [-1.3337170645002665, 0.8165255248369955, -1.1176306072113353, -0.3795837046323856, -0.4566266175595155, ...],
      "T8": [1.1249338411898984, 0.344537871173198, -0.07098561653976346, -0.38779666780194233, -0.6355755476180854, ...],
      "P7": [-0.14398780214116585, -0.924721386936874, -1.1839103631445513, 1.6752634303512914, -1.2222067725095023, ...],
      "P3": [1.495976473221507, -1.5734777115485588, -0.06460792674748969, -0.5527134141055418, -0.397318576300021, ...],
      "Pz": [0.8471913192896605, -0.7233281616999921, -1.6263810449134155, 1.0538362235504746, 0.488596369028829, ...],
      "P4": [0.6294948203897025, 0.40086254767128854, 0.5706187740182544, 1.5943150662717909, 0.19149878522002922, ...],
      "P8": [0.4481827814193037, 0.9022776557239659, -0.5231512851390405, -1.4433808499009646, -0.02148460182664529, ...],
      "O1": [-0.7147077827281976, 0.8505623617716511, -0.6165448547898463, -1.7839471902484112, 0.4576363329578695, ...],
      "O2": [-1.4338843266045338, 0.4797565809985554, 1.316601280934398, -1.0713868599067837, -1.191140912230335, ...],
    },
  }]

This code snippet builds a list of records, one per trial. Each record specifies the source and measure, a list of timestamps covering one second of data, the trial's class label in the 'chunk' field, and a dictionary of values for each channel. The resulting data list contains the necessary information for every trial, channel, and time sample.
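
Before uploading, you can inspect the structure of the records (a minimal verification sketch, not part of the original notebook); each trial becomes one dictionary with the keys shown in the output above:

# One record per trial, each carrying the fields expected by the API
print(len(data))                       # 12 trials
print(sorted(data[0].keys()))          # ['chunk', 'measure', 'source', 'timestamps', 'values']
print(len(data[0]['values']['Fp1']))   # 1000 samples per channel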

Next, we will upload the time series data to the database, following the same procedure as before.

[8]:
await api.timeserie.post(data, batch_size=32)
[8]:
[[{'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000},
  {'status': 'success',
   'message': 'Your data has been successfully saved.',
   'objects_created': 16000}]]

This code snippet uploads the time series data to the database using the api.timeserie.post() method with a batch size of 32. The batch size determines how many records are sent to the database in a single request, which can help improve the efficiency of the upload process. Each entry in the response reports 16,000 objects created, one per channel and time sample of a trial (16 channels × 1000 samples).
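
Conceptually, the batch size splits the list of records into groups that are posted one request at a time. A minimal sketch of the splitting logic follows (this only illustrates the idea; the client performs the batching internally when batch_size is passed):

# Illustration only: split the records into groups of at most 32,
# mirroring what batch_size=32 asks the client to do internally.
batch_size = 32
batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
print(len(batches), [len(b) for b in batches])  # 1 [12]: all 12 records fit in a single batch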

Querying trials

In this section, we will demonstrate how to query the trials based on certain parameters, which can be used to filter and retrieve specific data from the database. In this example, we will query trials from the 'Test.v2' source and the 'measure_02' measure, restricted to the 'cLass-0', 'cLass-1', and 'cLass-2' classes and to the 'Fp1' and 'Fp2' channels.

[9]:
trials_response = await api.timeserie.get({
    'source': 'Test.v2',
    'measure': 'measure_02',
    'chunks': ['cLass-0', 'cLass-1', 'cLass-2'],
    'channels': [
        'Fp1',
        'Fp2',
    ],
    'timestamps': 'false',
    # 'page_size': 2,
})

JSON(trials_response, max_list_len=3)

{
  "count": 9,
  "next": null,
  "previous": null,
  "results": [
    {
      "source": "Test.v2",
      "measure": "measure_02",
      "timestamps": [],
      "values":
      {
        "Fp1": [-1.5177400358889044, -0.788977172993793, -0.03230030278992793, ...],
        "Fp2": [0.7053272431256052, 0.564824599958775, 0.24947113891302522, ...],
      },
      "chunk": "cLass-1",
    },
    {
      "source": "Test.v2",
      "measure": "measure_02",
      "timestamps": [],
      "values":
      {
        "Fp1": [-1.3749966783068264, 0.9200507823249415, -0.6619644915989598, ...],
        "Fp2": [-0.43236545715519153, -1.243815552719948, -0.3424818244410737, ...],
      },
      "chunk": "cLass-1",
    },
    {
      "source": "Test.v2",
      "measure": "measure_02",
      "timestamps": [],
      "values":
      {
        "Fp1": [-0.9630703498756138, -2.2598214681790183, -1.0793328027824352, ...],
        "Fp2": [-1.2945145713763562, 0.1693452110584934, -0.5178774846086539, ...],
      },
      "chunk": "cLass-2",
    }, ...],
}

In this example, the trials are filtered by the 'Test.v2' source, the 'measure_02' measure, and the 'cLass-0', 'cLass-1', and 'cLass-2' classes, and only the 'Fp1' and 'Fp2' channels are requested. The 'timestamps' parameter is set to 'false' to exclude time information from the response. The resulting trials_response contains the trials that match these filters, providing a convenient way to analyze and process the data.
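
If the result set is large, the commented-out page_size parameter can be used to paginate the response. The sketch below reuses the same endpoint and parameters shown above and assumes the paginated response keeps the count/next/previous structure seen in the output:

# Hedged sketch: request the same trials two at a time instead of all at once
paged_response = await api.timeserie.get({
    'source': 'Test.v2',
    'measure': 'measure_02',
    'chunks': ['cLass-0', 'cLass-1', 'cLass-2'],
    'channels': ['Fp1', 'Fp2'],
    'timestamps': 'false',
    'page_size': 2,
})

print(paged_response['count'])  # total number of matching trials
print(paged_response['next'])   # URL of the next page of results, if any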

Reconstructing data from queried trials

To reconstruct the data, we will iterate through the trials in the response and extract the channel values and trial classes. This process allows us to reassemble the data into a suitable format for further analysis or processing.

[10]:
trials = []
classes = []
for trial in trials_response['results']:
    trials.append(list(trial['values'].values()))
    classes.append(trial['chunk'])

np.array(trials).shape, np.array(classes).shape
[10]:
((9, 2, 1000), (9,))

Utilizing the get_data function from the Dunderlab API

The script below uses the get_data function from the Dunderlab API to achieve the same goal as the code above. This function simplifies the process of reconstructing the data from the queried trials.

[11]:
from dunderlab.api.utils import get_data

trials, classes = get_data(trials_response)
trials.shape, classes.shape
[11]:
((9, 2, 1000), (9,))

In this example, the reconstructed data has a shape of (9, 2, 1000): the query matched 9 trials (three classes with three trials each), each with the 2 requested channels and 1000 time points. The reconstructed classes array has a shape of (9,), one class label per trial. This reconstructed data can now be used for further analysis or processing, such as machine learning or visualization tasks.
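
For instance, the (trials, channels, time) array can be flattened into one feature vector per trial and paired with the class vector to train a simple classifier. The sketch below uses scikit-learn, which is an assumption of this example and not part of the original notebook; no real structure should be expected from random data.

# Hypothetical downstream step: scikit-learn is assumed to be installed;
# it is not used anywhere else in this notebook.
from sklearn.linear_model import LogisticRegression

X = trials.reshape(len(trials), -1)   # flatten (channels, time) into one feature vector per trial
y = classes                           # one class label per trial

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))                # training accuracy on this toy random dataset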