Analysis
========

Raw Data
--------
Of the 664 `.xml` files, 167 were malformed. After removing the low-level
log entries, the number of malformed files dropped to 12. These sessions have
either an incomplete final high-level entry or a single unclosed tag. The most
appropriate procedure is to discard them.
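The well-formedness check described above can be sketched by simply attempting to parse each file; the directory layout and function name here are assumptions for illustration, not the project's actual code.

```python
# Sketch: count malformed session files by attempting to parse each one.
# Assumes one `.xml` file per session in a flat directory.
import xml.etree.ElementTree as ET
from pathlib import Path

def count_malformed(session_dir: str) -> int:
    """Return the number of session files that fail to parse."""
    malformed = 0
    for path in Path(session_dir).glob("*.xml"):
        try:
            ET.parse(path)  # raises ParseError on unclosed/incomplete tags
        except ET.ParseError:
            malformed += 1
    return malformed
```

The same loop, run before and after stripping the low-level entries, gives the 167 vs. 12 comparison.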

Initial
-------
Before performing any initial analysis of the general characteristics of the
raw data, much preprocessing is needed because of the nature of the data.

The data comes as a single `.xml` file per session, containing all the
low-level inputs and high-level actions of the player in chronological order.

The events that best represent the flow of the game, and the ones that will
be used, are the high-level ones. For *La Dama Boba*, the `set`, `inc`, and
`dec` event types record the scores of the different measures found in the
game.
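Replaying these events gives a measure's value at each point in the session. A minimal sketch, assuming events have already been extracted into `(kind, measure, amount)` tuples (the actual sessions store them as XML entries):

```python
# Sketch: fold set/inc/dec events into per-measure running values.
def replay(events):
    """Return (final values, per-event history) for a list of events."""
    values = {}
    history = []
    for kind, measure, amount in events:
        if kind == "set":
            values[measure] = amount          # establish an initial value
        elif kind == "inc":
            values[measure] = values.get(measure, 0) + amount
        elif kind == "dec":
            values[measure] = values.get(measure, 0) - amount
        history.append((measure, values[measure]))
    return values, history
```

The `history` list is what the time-series strategy below keeps; the `values` dict is what the final-values strategy keeps.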

There are two strategies to tackle the data. One is to take the final values
of these measures and treat them as parameters, which makes it easy to collect
and save all the sessions in a single `.csv` file.
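The final-values strategy amounts to one CSV row per session. A sketch, with hypothetical measure names and an assumed `{session id: {measure: final value}}` mapping:

```python
# Sketch: flatten each session to its final measure values, one CSV row each.
import csv

def sessions_to_csv(sessions, measures, out_path):
    """sessions: mapping of session id -> {measure name: final value}."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["session"] + measures)
        for sid, finals in sessions.items():
            # Empty cell for measures a session never touched.
            writer.writerow([sid] + [finals.get(m, "") for m in measures])
```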

However, because `set` events establish an initial value and `inc` and `dec`
events alter it, the most appropriate strategy is to treat the data as a time
series; this makes sense as there is autocorrelation between observations. In
turn, a series of additional challenges appears, such as the uneven spacing
between time measurements (although the series could be resampled to make it
even), the need for a cross-validation technique that respects the temporal
ordering, and so on.
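One simple cross-validation scheme that respects temporal ordering is forward chaining (an expanding training window, with each test fold strictly after its training data). A minimal sketch; equivalent functionality exists in scikit-learn as `TimeSeriesSplit`:

```python
# Sketch: forward-chaining splits over n_samples ordered observations.
# Each fold trains on everything before the test window, never after it.
def forward_chaining_splits(n_samples, n_splits):
    """Yield (train_indices, test_indices) pairs in temporal order."""
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train = list(range(0, k * fold))
        test = list(range(k * fold, min((k + 1) * fold, n_samples)))
        yield train, test
```

This avoids the leakage an ordinary shuffled k-fold split would introduce with autocorrelated observations.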