This repository provides a comprehensive evaluation framework for positional encoding methods in transformer-based time series models, along with implementations and benchmarking results.
```python
import torch

from models.transformer import TimeSeriesTransformer

# Use in transformer
model = TimeSeriesTransformer(
    input_timesteps=SEQ_LENGTH,         # Sequence length
    in_channels=INPUT_CHANNELS,         # Number of input channels
    patch_size=PATCH_SIZE,              # Patch size for embedding
    embedding_dim=EMBED_DIM,            # Embedding dimension
    num_transformer_layers=NUM_LAYERS,  # Number of transformer layers (4, 8, etc.)
    num_heads=N_HEADS,                  # Number of attention heads
    dim_feedforward=DIM_FF,             # Feedforward dimension
    dropout=DROPOUT,                    # Dropout rate (0.1, 0.2, etc.)
    num_classes=NUM_CLASSES,            # Number of output classes
    pos_encoding='PE_Name',             # Positional encoding type
)

# Forward pass
x = torch.randn(BATCH_SIZE, SEQ_LENGTH, INPUT_CHANNELS)  # (batch, sequence, features)
output = model(x)
```
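Because the point of the framework is to compare encodings, the same constructor call can be repeated with different `pos_encoding` values. The snippet below is a minimal sketch of that pattern: the hyperparameter values and the encoding names in the list are placeholders chosen for illustration, not identifiers defined by the repository, so substitute the encoding names your installation actually accepts.

```python
import torch

from models.transformer import TimeSeriesTransformer

# Placeholder hyperparameters, for illustration only.
SEQ_LENGTH, INPUT_CHANNELS, PATCH_SIZE = 128, 3, 16
EMBED_DIM, NUM_LAYERS, N_HEADS = 64, 4, 8
DIM_FF, DROPOUT, NUM_CLASSES, BATCH_SIZE = 256, 0.1, 5, 32

# Placeholder encoding names; replace with the identifiers the library accepts.
for pe_name in ['PE_Name_1', 'PE_Name_2']:
    model = TimeSeriesTransformer(
        input_timesteps=SEQ_LENGTH,
        in_channels=INPUT_CHANNELS,
        patch_size=PATCH_SIZE,
        embedding_dim=EMBED_DIM,
        num_transformer_layers=NUM_LAYERS,
        num_heads=N_HEADS,
        dim_feedforward=DIM_FF,
        dropout=DROPOUT,
        num_classes=NUM_CLASSES,
        pos_encoding=pe_name,
    )
    x = torch.randn(BATCH_SIZE, SEQ_LENGTH, INPUT_CHANNELS)
    logits = model(x)
    # For a classification head, logits.shape is expected to be (BATCH_SIZE, NUM_CLASSES).
    print(pe_name, logits.shape)
```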
## Results
Our experimental evaluation encompasses ten distinct positional encoding methods tested across eleven diverse time series datasets using two transformer architectures.