Presentation
A time-series classification challenge is organized as part of the workshop.
Participants are given a training set of labeled multivariate time series (train.h5) representing isolated gestures captured with a Kinect system by different users.
Two tasks are considered:
- Task 1 is a standard Time Series Classification task, for which test_task1.h5 should be used;
- Task 2 adds time series spotting to the challenge: test time series (provided in test_task2.h5) are concatenations of different gestures, and participants are asked to spot both the gesture classes that form each time series and their locations.
The best teams will have the opportunity to present their methods in a special session during the workshop.
For more information or to participate in this challenge, send an email to Romain Tavenard (romain[dot]tavenard[at]univ-rennes2[dot]fr) and Simon Malinowski (simon[dot]malinowski[at]irisa[dot]fr).
The data sets are the property of IRISA, research team EXPRESSION. Challenge participants commit not to divulge them.
Important dates
The official leaderboard (using the full test set) for this challenge will be built from the results submitted before September 1st, 2016.
Based on this leaderboard, the best teams will be offered the opportunity to present their methods during the workshop.
Official leaderboard
The following leaderboard has been computed on the whole test set.
Task 1
Rank | Team name | Method name | Accuracy | Number of submitted runs (max. 10) |
1. | UCRDMYeh | bofSC + randShape | 0.961 | 1 |
2. | Mustafa Baydogan | SMTS | 0.956 | 3 |
– | Lemaire-Boullé, Orange Labs | Automatic Feature Construction + Selective Naive Bayes | 0.956 | 2 |
4. | HU-WBI | MWSL | 0.950 | 3 |
5. | CIML | RC | 0.944 | 3 |
– | UCRDMYeh | convNet | 0.944 | 1 |
– | UCRDMYeh | bofSC | 0.944 | 1 |
8. | UEA | COTE | 0.939 | 4 |
9. | UEA | HESCA | 0.933 | 2 |
– | Josif Grabocka | LearningShapelets | 0.933 | 1 |
11. | UCRDM | pDTWKerSVM + RandSub | 0.928 | 2 |
– | UEA | Rotation Forest Benchmark | 0.928 | 3 |
13. | HU-WBI | BOSS | 0.911 | 4 |
14. | DDIG | Softmax+RandShapes | 0.906 | 1 |
15. | HU-WBI | BOSS-DTW | 0.894 | 2 |
16. | SAMAS | GK-kNC | 0.889 | 2 |
17. | UCRDMYeh | randShape | 0.883 | 3 |
18. | Baseline | DTW-1NN | 0.872 | – |
19. | WP-lab | MsV | 0.861 | 2 |
20. | WP-lab | MsV+csp+lr | 0.839 | 1 |
– | Baseline | ED-1NN | 0.839 | – |
22. | WP-lab | MsV+csp | 0.833 | 2 |
23. | AMA-IKATS | Classification trees for time series | 0.800 | 1 |
24. | BogaziciUni | mv-ARF | 0.789 | 2 |
Task 2
The evaluation method is based on an edit distance that first aligns two sequences of labeled temporal segments and then, by back-tracing an optimal alignment path, provides a confusion matrix at the label level. From this confusion matrix, standard evaluation measures such as precision, recall and F1 can easily be derived, as well as other measures such as detection latency, which can be quite important in (early) pattern detection applications. For this ranking, the F1 measure is used.
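As an illustration of the final scoring step only (the segment-alignment edit distance itself is not reproduced here), the following Python sketch derives per-class precision, recall and F1 from a confusion matrix; the matrix orientation (rows as ground truth, columns as predictions) is an assumption.

import numpy as np

def scores_from_confusion(C):
    # C: confusion matrix at the label level; rows are assumed to hold
    # ground-truth labels and columns predicted labels.
    C = np.asarray(C, dtype=float)
    tp = np.diag(C)
    precision = tp / np.maximum(C.sum(axis=0), 1)  # per-class precision
    recall = tp / np.maximum(C.sum(axis=1), 1)     # per-class recall
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1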
Rank | Team name | Method name | F1 | Number of submitted runs (max. 10) |
1. | UCRDMYeh | bofSC + randShape | 0.959 | 1 |
– | UCRDM | pDTWKerSVM + RandSub | 0.959 | 2 |
3. | UCRDMYeh | bofSC | 0.956 | 2 |
4. | UCRDMYeh | randShape | 0.865 | 2 |
Data format
Data is distributed as HDF5 files (train.h5, test_task1.h5 and test_task2.h5). Sample code for reading these files is provided for Python (here) and R (here). Matlab users should be able to adapt this code for their scripts using hdf5read.
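For reference, a minimal loading sketch in Python (using h5py) is shown below; the dataset keys "data" and "labels" are assumptions, and the actual key names should be taken from the provided sample code.

import h5py
import numpy as np

# Minimal sketch for loading the training set. The keys "data" and "labels"
# are hypothetical: check the provided sample code for the actual names.
with h5py.File("train.h5", "r") as f:
    X = np.array(f["data"])    # time series, shape (n, t, d)
    y = np.array(f["labels"])  # gesture classes, shape (n,)
print(X.shape, y.shape)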
Datasets are 3-dimensional arrays of size (n, t, d), where n is the number of time series, t the number of time instants and d the number of features (i.e. the number of sensors × 3 here). Hence, each time series is a matrix of t rows and d = 24 (8 sensors × 3 coordinates) columns. Column ordering is as follows:
1-3. Hand tip left (X, Y, Z)
4-6. Hand tip right (X, Y, Z)
7-9. Elbow left (X, Y, Z)
10-12. Elbow right (X, Y, Z)
13-15. Wrist left (X, Y, Z)
16-18. Wrist right (X, Y, Z)
19-21. Thumb left (X, Y, Z)
22-24. Thumb right (X, Y, Z)
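Given this fixed ordering, each sensor's XYZ trajectory can be sliced directly out of the (n, t, 24) array. The sketch below is illustrative; the sensor names are informal labels for the column groups listed above.

# Eight tracked points, three coordinates each, in the column order above.
SENSORS = ["hand_tip_left", "hand_tip_right", "elbow_left", "elbow_right",
           "wrist_left", "wrist_right", "thumb_left", "thumb_right"]

def sensor_trajectory(X, sensor):
    # X has shape (n, t, 24); the result has shape (n, t, 3).
    j = SENSORS.index(sensor)
    return X[:, :, 3 * j : 3 * j + 3]

# Using X loaded as in the sketch above:
elbow_right_xyz = sensor_trajectory(X, "elbow_right")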
Submission
Each team (i.e. each research group / method pair) is allowed to submit up to 10 runs per task. Unless otherwise stated, the last submitted run is the one considered for the ranking.
Task 1
To submit a run, send the organizers an email with the results attached as a text file containing the predicted class for each test time series, one per row, such as:
5
4
3
3
...
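Such a file can be written in one line with numpy; the predictions below are a hypothetical example.

import numpy as np

predictions = [5, 4, 3, 3]  # hypothetical predicted classes, one per test series
np.savetxt("task1_run.txt", predictions, fmt="%d")  # writes one class per row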
Task 2
To submit a run, send the organizers an email with the results attached as a text file containing the predicted classes for each test time series (one row per test time series, 6 gestures per row) along with their start/end times, such as:
5:0-65 2:66-71 3:72-112 5:113-164 6:165-220 2:224-305  # occurrence of class 5 detected in frames 0 to 65, etc.
...
Note that predictions must not overlap and must be ordered by increasing start time.
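A small helper along the following lines can format one row while enforcing both constraints; the function name and the (label, start, end) segment representation are assumptions.

def format_row(segments):
    # segments: list of (label, start, end) tuples for one test series.
    segments = sorted(segments, key=lambda s: s[1])  # increasing start time
    for (_, _, end), (_, start, _) in zip(segments, segments[1:]):
        assert start > end, "predicted segments must not overlap"
    return " ".join("%d:%d-%d" % (label, start, end)
                    for label, start, end in segments)

print(format_row([(5, 0, 65), (2, 66, 71), (3, 72, 112)]))
# prints: 5:0-65 2:66-71 3:72-112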