
Friday, July 21, 2023

Transformers for Time Series Forecasting: Predicting Stock Prices with AI Example

Transformers have shown promising results across a wide range of natural language processing (NLP) tasks, but they can also be adapted for time series forecasting. Let's walk through an example of using a transformer model to predict stock prices with Python and PyTorch. We'll use the 'transformers' library, which provides pre-trained transformer models.


First, make sure you have the required libraries installed:

pip install torch transformers numpy pandas

Now, let's proceed with the code example:

import numpy as np
import pandas as pd
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification

# Load the stock price data (for illustration purposes, you should have your own dataset).
# The dataset should have two columns: 'Date' and 'Price'.
data = pd.read_csv('stock_price_data.csv')
data['Date'] = pd.to_datetime(data['Date'])
data.sort_values('Date', inplace=True)
data.reset_index(drop=True, inplace=True)

# Normalize the stock prices to [0, 1], keeping the original min/max for de-normalization later
price_min, price_max = data['Price'].min(), data['Price'].max()
data['Price'] = (data['Price'] - price_min) / (price_max - price_min)

# Prepare the data for training
window_size = 10  # Number of past prices to consider for each prediction
X, y = [], []
for i in range(len(data) - window_size):
    X.append(data['Price'][i:i + window_size].values)
    y.append(data['Price'][i + window_size])
X, y = np.array(X), np.array(y)
y = torch.tensor(y, dtype=torch.float32)

# Define the model and tokenizer.
# num_labels=1 turns the classification head into a single-output regression head.
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=1)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

max_length = 64  # Each price becomes several word pieces, so allow more tokens than window_size

def encode_window(window):
    """Render a window of normalized prices as text and tokenize it for BERT."""
    text = ' '.join(f'{price:.4f}' for price in window)
    return tokenizer.encode_plus(
        text,
        add_special_tokens=True,
        max_length=max_length,
        padding='max_length',
        truncation=True,
        return_attention_mask=True,
        return_tensors='pt',
    )

# Tokenize the inputs
input_ids, attention_masks = [], []
for seq in X:
    encoded_dict = encode_window(seq.tolist())
    input_ids.append(encoded_dict['input_ids'])
    attention_masks.append(encoded_dict['attention_mask'])
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)

# Create the DataLoader
dataset = TensorDataset(input_ids, attention_masks, y)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# Define the loss function and optimizer
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Training loop
num_epochs = 10
model.train()
for epoch in range(num_epochs):
    total_loss = 0.0
    for batch in dataloader:
        model.zero_grad()
        outputs = model(input_ids=batch[0], attention_mask=batch[1])
        predicted_prices = outputs.logits.squeeze(-1)
        loss = loss_function(predicted_prices, batch[2])
        total_loss += loss.item()
        loss.backward()
        optimizer.step()
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {total_loss:.4f}')

# Make predictions on future data, one step at a time, feeding each prediction back in
num_future_points = 5
future_data = data['Price'][-window_size:].values
model.eval()
for _ in range(num_future_points):
    encoded = encode_window(future_data[-window_size:].tolist())
    with torch.no_grad():
        outputs = model(input_ids=encoded['input_ids'], attention_mask=encoded['attention_mask'])
    predicted_price = outputs.logits.item()
    future_data = np.append(future_data, predicted_price)

# De-normalize the predictions back to the original price scale
future_data = future_data * (price_max - price_min) + price_min

print("Predicted stock prices for the next", num_future_points, "days:")
print(future_data[-num_future_points:])
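
A note on the model setup: passing num_labels=1 to BertForSequenceClassification gives the model a single-output head, so its logits can be read as a continuous predicted price and trained with MSE loss, which is exactly what the training loop above does. Also, because BERT expects token ids rather than raw numbers, each window of normalized prices is rendered as a short text string before being tokenized.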



The key part where transformers make a difference in this example is tokenization and sequence processing. Here we use the BertTokenizer to convert each window of historical stock prices into a tokenized sequence suitable for feeding into the BertForSequenceClassification model.
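
To make that step concrete, here is a minimal sketch (using the same BertTokenizer as above, with illustrative window values) of what a single ten-price window looks like once it has been rendered as text and tokenized:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# One normalized 10-price window, rendered as a text sequence (values are illustrative)
window = [0.12, 0.15, 0.14, 0.18, 0.21, 0.20, 0.25, 0.24, 0.27, 0.30]
text = ' '.join(f'{price:.4f}' for price in window)

encoded = tokenizer.encode_plus(
    text,
    add_special_tokens=True,      # adds [CLS] and [SEP]
    max_length=64,
    padding='max_length',
    truncation=True,
    return_attention_mask=True,
    return_tensors='pt',
)
print(encoded['input_ids'].shape)                                              # torch.Size([1, 64])
print(tokenizer.convert_ids_to_tokens(encoded['input_ids'][0].tolist())[:12])  # first few word pieces

Each number is split into several word pieces, so the tokenized sequence is longer than the ten raw prices; that is why max_length is set larger than window_size in the training code.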

Transformers like BERT are designed to handle sequential data with dependencies between elements. By using a transformer, we allow the model to capture long-range dependencies and patterns within the stock price time series: it can learn to consider not only the most recent prices but also the relationships between all of the historical prices inside each window when making a prediction.
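
If you would rather keep the prices numeric instead of tokenizing them as text, a common alternative is to project each price into an embedding and let a small encoder-only transformer attend over the window directly. The sketch below is not part of the example above; PriceTransformer and its hyperparameters are hypothetical choices, shown only to illustrate self-attention over a price window with PyTorch's built-in nn.TransformerEncoder:

import torch
import torch.nn as nn

class PriceTransformer(nn.Module):
    """Minimal encoder-only transformer over a window of normalized prices."""
    def __init__(self, window_size=10, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)                            # embed each scalar price
        self.pos_emb = nn.Parameter(torch.zeros(1, window_size, d_model))  # learned positional embeddings
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)                                  # predict the next price

    def forward(self, x):                                      # x: (batch, window_size)
        h = self.input_proj(x.unsqueeze(-1)) + self.pos_emb    # (batch, window_size, d_model)
        h = self.encoder(h)                                    # self-attention across the whole window
        return self.head(h[:, -1, :]).squeeze(-1)              # read the prediction off the last position

model = PriceTransformer()
dummy_windows = torch.rand(8, 10)      # a batch of 8 windows of 10 normalized prices
print(model(dummy_windows).shape)      # torch.Size([8])

Trained with the same MSE loss and sliding windows as above, a model like this avoids the detour through text tokenization entirely.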

Traditional methods like ARIMA, or even feedforward neural networks, may be less effective at capturing such long-range dependencies, especially on long time series. The transformer's self-attention mechanism lets it attend to the most relevant parts of the input sequence and learn meaningful representations, which can be crucial for accurate time series forecasting.
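
For a concrete point of comparison, a classical baseline on the same price series might look like the sketch below. It assumes statsmodels is installed (it is not in the pip install line above), and order=(5, 1, 0) is an illustrative choice rather than a tuned one:

# Hypothetical ARIMA baseline on the same 'Price' column
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

data = pd.read_csv('stock_price_data.csv', parse_dates=['Date']).sort_values('Date')

arima = ARIMA(data['Price'], order=(5, 1, 0)).fit()
print(arima.forecast(steps=5))   # the next 5 predicted prices

An ARIMA model like this applies a fixed linear autoregressive structure, whereas the transformer's attention weights are learned from the data, which is the practical difference the paragraph above is pointing at.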

